2026-04-17 00:00:07.438825 | Job console starting
2026-04-17 00:00:07.458243 | Updating git repos
2026-04-17 00:00:07.517478 | Cloning repos into workspace
2026-04-17 00:00:07.894909 | Restoring repo states
2026-04-17 00:00:07.926278 | Merging changes
2026-04-17 00:00:07.926304 | Checking out repos
2026-04-17 00:00:08.419467 | Preparing playbooks
2026-04-17 00:00:09.320608 | Running Ansible setup
2026-04-17 00:00:16.706449 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-17 00:00:18.178645 |
2026-04-17 00:00:18.178777 | PLAY [Base pre]
2026-04-17 00:00:18.252283 |
2026-04-17 00:00:18.252413 | TASK [Setup log path fact]
2026-04-17 00:00:18.304241 | orchestrator | ok
2026-04-17 00:00:18.387167 |
2026-04-17 00:00:18.387356 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-17 00:00:18.542306 | orchestrator | ok
2026-04-17 00:00:18.577218 |
2026-04-17 00:00:18.577335 | TASK [emit-job-header : Print job information]
2026-04-17 00:00:18.731766 | # Job Information
2026-04-17 00:00:18.731979 | Ansible Version: 2.16.14
2026-04-17 00:00:18.732017 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-17 00:00:18.732068 | Pipeline: periodic-midnight
2026-04-17 00:00:18.732097 | Executor: 521e9411259a
2026-04-17 00:00:18.732117 | Triggered by: https://github.com/osism/testbed
2026-04-17 00:00:18.732138 | Event ID: 3d35b43f50114718a263c5f8a3bb7ce5
2026-04-17 00:00:18.748539 |
2026-04-17 00:00:18.748650 | LOOP [emit-job-header : Print node information]
2026-04-17 00:00:19.085872 | orchestrator | ok:
2026-04-17 00:00:19.086030 | orchestrator | # Node Information
2026-04-17 00:00:19.086074 | orchestrator | Inventory Hostname: orchestrator
2026-04-17 00:00:19.086100 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-17 00:00:19.086122 | orchestrator | Username: zuul-testbed04
2026-04-17 00:00:19.086142 | orchestrator | Distro: Debian 12.13
2026-04-17 00:00:19.086165 | orchestrator | Provider: static-testbed
2026-04-17 00:00:19.086186 | orchestrator | Region:
2026-04-17 00:00:19.086208 | orchestrator | Label: testbed-orchestrator
2026-04-17 00:00:19.086227 | orchestrator | Product Name: OpenStack Nova
2026-04-17 00:00:19.086247 | orchestrator | Interface IP: 81.163.193.140
2026-04-17 00:00:19.098570 |
2026-04-17 00:00:19.098675 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-17 00:00:20.773509 | orchestrator -> localhost | changed
2026-04-17 00:00:20.784160 |
2026-04-17 00:00:20.784272 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-17 00:00:23.009996 | orchestrator -> localhost | changed
2026-04-17 00:00:23.034370 |
2026-04-17 00:00:23.034470 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-17 00:00:23.962380 | orchestrator -> localhost | ok
2026-04-17 00:00:23.968874 |
2026-04-17 00:00:23.968967 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-17 00:00:24.036884 | orchestrator | ok
2026-04-17 00:00:24.065732 | orchestrator | included: /var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-17 00:00:24.088534 |
2026-04-17 00:00:24.088623 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-17 00:00:27.404788 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-17 00:00:27.404954 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/work/cfb67c0e06f4403695f75a6ddf5ac11e_id_rsa
2026-04-17 00:00:27.404986 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/work/cfb67c0e06f4403695f75a6ddf5ac11e_id_rsa.pub
2026-04-17 00:00:27.405008 | orchestrator -> localhost | The key fingerprint is:
2026-04-17 00:00:27.405027 | orchestrator -> localhost | SHA256:EyLQTPTQuqGhMCWfAr0rNoRZhy62gdxWKfI/ZJxZq1s zuul-build-sshkey
2026-04-17 00:00:27.405057 | orchestrator -> localhost | The key's randomart image is:
2026-04-17 00:00:27.405083 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-17 00:00:27.405102 | orchestrator -> localhost | | ..*+.. |
2026-04-17 00:00:27.405120 | orchestrator -> localhost | |o.=o+=.. |
2026-04-17 00:00:27.405137 | orchestrator -> localhost | |=B++=.* o |
2026-04-17 00:00:27.405153 | orchestrator -> localhost | |O=*+oB o . |
2026-04-17 00:00:27.405170 | orchestrator -> localhost | |+=+++o. S |
2026-04-17 00:00:27.405191 | orchestrator -> localhost | |o+o .+ E . |
2026-04-17 00:00:27.405207 | orchestrator -> localhost | |... + |
2026-04-17 00:00:27.405223 | orchestrator -> localhost | | . |
2026-04-17 00:00:27.405240 | orchestrator -> localhost | | |
2026-04-17 00:00:27.405257 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-17 00:00:27.405297 | orchestrator -> localhost | ok: Runtime: 0:00:01.707844
2026-04-17 00:00:27.411178 |
2026-04-17 00:00:27.411254 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-17 00:00:27.458461 | orchestrator | ok
2026-04-17 00:00:27.473347 | orchestrator | included: /var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-17 00:00:27.499752 |
2026-04-17 00:00:27.499845 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-17 00:00:27.545385 | orchestrator | skipping: Conditional result was False
2026-04-17 00:00:27.552529 |
2026-04-17 00:00:27.552624 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-17 00:00:28.686487 | orchestrator | changed
2026-04-17 00:00:28.699497 |
2026-04-17 00:00:28.699585 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-17 00:00:29.032238 | orchestrator | ok
2026-04-17 00:00:29.043615 |
2026-04-17 00:00:29.043705 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-17 00:00:29.536216 | orchestrator | ok
2026-04-17 00:00:29.540928 |
2026-04-17 00:00:29.541006 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-17 00:00:30.048890 | orchestrator | ok
2026-04-17 00:00:30.053804 |
2026-04-17 00:00:30.053883 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-17 00:00:30.094884 | orchestrator | skipping: Conditional result was False
2026-04-17 00:00:30.100349 |
2026-04-17 00:00:30.100428 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-17 00:00:31.159266 | orchestrator -> localhost | changed
2026-04-17 00:00:31.195009 |
2026-04-17 00:00:31.195951 | TASK [add-build-sshkey : Add back temp key]
2026-04-17 00:00:32.021766 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/work/cfb67c0e06f4403695f75a6ddf5ac11e_id_rsa (zuul-build-sshkey)
2026-04-17 00:00:32.021944 | orchestrator -> localhost | ok: Runtime: 0:00:00.049827
2026-04-17 00:00:32.028962 |
2026-04-17 00:00:32.029058 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-17 00:00:32.717184 | orchestrator | ok
2026-04-17 00:00:32.722037 |
2026-04-17 00:00:32.722116 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-17 00:00:32.785362 | orchestrator | skipping: Conditional result was False
2026-04-17 00:00:32.845269 |
2026-04-17 00:00:32.845363 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-17 00:00:33.252342 | orchestrator | ok
2026-04-17 00:00:33.263714 |
2026-04-17 00:00:33.269933 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-17 00:00:33.307620 | orchestrator | ok
2026-04-17 00:00:33.323836 |
2026-04-17 00:00:33.323938 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-17 00:00:34.304389 | orchestrator -> localhost | ok
2026-04-17 00:00:34.315645 |
2026-04-17 00:00:34.315748 | TASK [validate-host : Collect information about the host]
2026-04-17 00:00:35.921635 | orchestrator | ok
2026-04-17 00:00:35.958787 |
2026-04-17 00:00:35.958936 | TASK [validate-host : Sanitize hostname]
2026-04-17 00:00:36.123911 | orchestrator | ok
2026-04-17 00:00:36.128495 |
2026-04-17 00:00:36.128587 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-17 00:00:38.425451 | orchestrator -> localhost | changed
2026-04-17 00:00:38.430473 |
2026-04-17 00:00:38.430555 | TASK [validate-host : Collect information about zuul worker]
2026-04-17 00:00:39.185221 | orchestrator | ok
2026-04-17 00:00:39.189528 |
2026-04-17 00:00:39.189659 | TASK [validate-host : Write out all zuul information for each host]
2026-04-17 00:00:40.199484 | orchestrator -> localhost | changed
2026-04-17 00:00:40.208584 |
2026-04-17 00:00:40.208680 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-17 00:00:40.530461 | orchestrator | ok
2026-04-17 00:00:40.544802 |
2026-04-17 00:00:40.544892 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-17 00:02:05.877872 | orchestrator | changed:
2026-04-17 00:02:05.878161 | orchestrator | .d..t...... src/
2026-04-17 00:02:05.878352 | orchestrator | .d..t...... src/github.com/
2026-04-17 00:02:05.878514 | orchestrator | .d..t...... src/github.com/osism/
2026-04-17 00:02:05.878552 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-17 00:02:05.878580 | orchestrator | RedHat.yml
2026-04-17 00:02:05.895572 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-17 00:02:05.895590 | orchestrator | RedHat.yml
2026-04-17 00:02:05.895641 | orchestrator | = 2.2.0"...
2026-04-17 00:02:17.670099 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-17 00:02:17.687874 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-17 00:02:17.833030 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-17 00:02:18.301355 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-17 00:02:18.369681 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-17 00:02:18.845314 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-17 00:02:18.913564 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-17 00:02:19.616493 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-17 00:02:19.616549 | orchestrator |
2026-04-17 00:02:19.616559 | orchestrator | Providers are signed by their developers.
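The `.d..t......` / `.L..t......` strings in the synchronize task above are rsync `--itemize-changes` codes: character 1 is the update type, character 2 the file type, and the remaining characters flag which attributes differ (`t` = modification time). A small decoder as a sketch — the field layout follows rsync's documented `%i` format, but the helper name and wording are my own, and the `u` slot is simplified to "atime" (rsync can also report `n`/`b` there):

```python
def decode_itemize(item: str) -> str:
    """Translate one rsync --itemize-changes string (e.g. '.d..t......') into words."""
    update = {
        "<": "transferred to remote", ">": "transferred to local",
        "c": "created", "h": "hardlink", ".": "not updated", "*": "message",
    }[item[0]]
    ftype = {"f": "file", "d": "directory", "L": "symlink",
             "D": "device", "S": "special file"}[item[1]]
    # Attribute slots in order: checksum, size, mtime, perms, owner, group, atime, acl, xattr.
    attrs = "checksum size mtime perms owner group atime acl xattr".split()
    changed = [name for name, flag in zip(attrs, item[2:]) if flag not in ".+ "]
    return f"{ftype}, {update}; changed: {', '.join(changed) or 'nothing'}"
```

Applied to the log above, `.d..t......` reads as a directory whose mtime was updated, and `.L..t......` as a symlink (here `CentOS.yml -> RedHat.yml`) whose mtime was updated.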
2026-04-17 00:02:19.616568 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-17 00:02:19.616577 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-17 00:02:19.616591 | orchestrator |
2026-04-17 00:02:19.616600 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-17 00:02:19.616615 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-17 00:02:19.616622 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-17 00:02:19.616629 | orchestrator | you run "tofu init" in the future.
2026-04-17 00:02:19.616966 | orchestrator |
2026-04-17 00:02:19.616988 | orchestrator | OpenTofu has been successfully initialized!
2026-04-17 00:02:19.616995 | orchestrator |
2026-04-17 00:02:19.617000 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-17 00:02:19.617011 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-17 00:02:19.617018 | orchestrator | should now work.
2026-04-17 00:02:19.617024 | orchestrator |
2026-04-17 00:02:19.617030 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-17 00:02:19.617037 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-17 00:02:19.617044 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-17 00:02:19.797201 | orchestrator | Created and switched to workspace "ci"!
2026-04-17 00:02:19.797264 | orchestrator |
2026-04-17 00:02:19.797275 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-17 00:02:19.797285 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-17 00:02:19.797295 | orchestrator | for this configuration.
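The provider selections resolved during `tofu init` above imply a `required_providers` block roughly like the following. This is a reconstruction for illustration, not the testbed's actual file: the `hashicorp/local` constraint is truncated in the log (only `= 2.2.0"...` survives), so the `>= 2.2.0` shown here is an assumption, and `hashicorp/null` carries no constraint because the log shows "Finding latest version".

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # constraint truncated in the log; ">=" is assumed
    }
    null = {
      source = "hashicorp/null" # no constraint: init reports "Finding latest version"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```

The resulting `.terraform.lock.hcl` pins the exact versions installed above (local v2.8.0, null v3.2.4, openstack v3.4.0), which is why the init output recommends committing it.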
2026-04-17 00:02:19.900573 | orchestrator | ci.auto.tfvars
2026-04-17 00:02:20.812378 | orchestrator | default_custom.tf
2026-04-17 00:02:21.818985 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-17 00:02:22.342843 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-17 00:02:23.017558 | orchestrator |
2026-04-17 00:02:23.017626 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-17 00:02:23.017634 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-17 00:02:23.017639 | orchestrator | + create
2026-04-17 00:02:23.017644 | orchestrator | <= read (data resources)
2026-04-17 00:02:23.017649 | orchestrator |
2026-04-17 00:02:23.017653 | orchestrator | OpenTofu will perform the following actions:
2026-04-17 00:02:23.017671 | orchestrator |
2026-04-17 00:02:23.017676 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-17 00:02:23.017680 | orchestrator | # (config refers to values not yet known)
2026-04-17 00:02:23.017714 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-17 00:02:23.017719 | orchestrator | + checksum = (known after apply)
2026-04-17 00:02:23.017724 | orchestrator | + created_at = (known after apply)
2026-04-17 00:02:23.017728 | orchestrator | + file = (known after apply)
2026-04-17 00:02:23.017732 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.017754 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.017758 | orchestrator | + min_disk_gb = (known after apply)
2026-04-17 00:02:23.017762 | orchestrator | + min_ram_mb = (known after apply)
2026-04-17 00:02:23.017766 | orchestrator | + most_recent = true
2026-04-17 00:02:23.017770 | orchestrator | + name = (known after apply)
2026-04-17 00:02:23.017774 | orchestrator | + protected = (known after apply)
2026-04-17 00:02:23.017778 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.017785 | orchestrator | + schema = (known after apply)
2026-04-17 00:02:23.017789 | orchestrator | + size_bytes = (known after apply)
2026-04-17 00:02:23.017793 | orchestrator | + tags = (known after apply)
2026-04-17 00:02:23.017797 | orchestrator | + updated_at = (known after apply)
2026-04-17 00:02:23.017801 | orchestrator | }
2026-04-17 00:02:23.017836 | orchestrator |
2026-04-17 00:02:23.017840 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-17 00:02:23.017844 | orchestrator | # (config refers to values not yet known)
2026-04-17 00:02:23.017848 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-17 00:02:23.017852 | orchestrator | + checksum = (known after apply)
2026-04-17 00:02:23.017856 | orchestrator | + created_at = (known after apply)
2026-04-17 00:02:23.017860 | orchestrator | + file = (known after apply)
2026-04-17 00:02:23.017864 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.017868 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.017872 | orchestrator | + min_disk_gb = (known after apply)
2026-04-17 00:02:23.017876 | orchestrator | + min_ram_mb = (known after apply)
2026-04-17 00:02:23.017879 | orchestrator | + most_recent = true
2026-04-17 00:02:23.017883 | orchestrator | + name = (known after apply)
2026-04-17 00:02:23.017887 | orchestrator | + protected = (known after apply)
2026-04-17 00:02:23.017891 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.017895 | orchestrator | + schema = (known after apply)
2026-04-17 00:02:23.017898 | orchestrator | + size_bytes = (known after apply)
2026-04-17 00:02:23.017902 | orchestrator | + tags = (known after apply)
2026-04-17 00:02:23.017906 | orchestrator | + updated_at = (known after apply)
2026-04-17 00:02:23.017910 | orchestrator | }
2026-04-17 00:02:23.017929 | orchestrator |
2026-04-17 00:02:23.017934 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-17 00:02:23.017938 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-17 00:02:23.017942 | orchestrator | + content = (known after apply)
2026-04-17 00:02:23.017946 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 00:02:23.017950 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 00:02:23.017954 | orchestrator | + content_md5 = (known after apply)
2026-04-17 00:02:23.017957 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 00:02:23.017961 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 00:02:23.017965 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 00:02:23.017969 | orchestrator | + directory_permission = "0777"
2026-04-17 00:02:23.017973 | orchestrator | + file_permission = "0644"
2026-04-17 00:02:23.017977 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-17 00:02:23.017980 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.017984 | orchestrator | }
2026-04-17 00:02:23.018062 | orchestrator |
2026-04-17 00:02:23.018067 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-17 00:02:23.018071 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-17 00:02:23.018075 | orchestrator | + content = (known after apply)
2026-04-17 00:02:23.018079 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 00:02:23.018083 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 00:02:23.018086 | orchestrator | + content_md5 = (known after apply)
2026-04-17 00:02:23.018090 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 00:02:23.018094 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 00:02:23.018102 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 00:02:23.018106 | orchestrator | + directory_permission = "0777"
2026-04-17 00:02:23.018110 | orchestrator | + file_permission = "0644"
2026-04-17 00:02:23.018118 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-17 00:02:23.018122 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.018126 | orchestrator | }
2026-04-17 00:02:23.018132 | orchestrator |
2026-04-17 00:02:23.018136 | orchestrator | # local_file.inventory will be created
2026-04-17 00:02:23.018139 | orchestrator | + resource "local_file" "inventory" {
2026-04-17 00:02:23.018143 | orchestrator | + content = (known after apply)
2026-04-17 00:02:23.018147 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 00:02:23.018151 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 00:02:23.018154 | orchestrator | + content_md5 = (known after apply)
2026-04-17 00:02:23.018158 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 00:02:23.018162 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 00:02:23.018166 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 00:02:23.018170 | orchestrator | + directory_permission = "0777"
2026-04-17 00:02:23.018173 | orchestrator | + file_permission = "0644"
2026-04-17 00:02:23.018177 | orchestrator | + filename = "inventory.ci"
2026-04-17 00:02:23.018181 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.018185 | orchestrator | }
2026-04-17 00:02:23.021956 | orchestrator |
2026-04-17 00:02:23.021979 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-17 00:02:23.021985 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-17 00:02:23.021989 | orchestrator | + content = (sensitive value)
2026-04-17 00:02:23.021994 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-17 00:02:23.021998 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-17 00:02:23.022003 | orchestrator | + content_md5 = (known after apply)
2026-04-17 00:02:23.022007 | orchestrator | + content_sha1 = (known after apply)
2026-04-17 00:02:23.022011 | orchestrator | + content_sha256 = (known after apply)
2026-04-17 00:02:23.022030 | orchestrator | + content_sha512 = (known after apply)
2026-04-17 00:02:23.022035 | orchestrator | + directory_permission = "0700"
2026-04-17 00:02:23.022040 | orchestrator | + file_permission = "0600"
2026-04-17 00:02:23.022044 | orchestrator | + filename = ".id_rsa.ci"
2026-04-17 00:02:23.022049 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022053 | orchestrator | }
2026-04-17 00:02:23.022057 | orchestrator |
2026-04-17 00:02:23.022061 | orchestrator | # null_resource.node_semaphore will be created
2026-04-17 00:02:23.022065 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-17 00:02:23.022069 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022073 | orchestrator | }
2026-04-17 00:02:23.022077 | orchestrator |
2026-04-17 00:02:23.022081 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-17 00:02:23.022086 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-17 00:02:23.022089 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022093 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022097 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022101 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022105 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022109 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-17 00:02:23.022113 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022116 | orchestrator | + size = 80
2026-04-17 00:02:23.022120 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022124 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022128 | orchestrator | }
2026-04-17 00:02:23.022132 | orchestrator |
2026-04-17 00:02:23.022135 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-17 00:02:23.022139 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 00:02:23.022143 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022147 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022151 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022163 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022167 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022171 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-17 00:02:23.022175 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022178 | orchestrator | + size = 80
2026-04-17 00:02:23.022182 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022186 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022190 | orchestrator | }
2026-04-17 00:02:23.022194 | orchestrator |
2026-04-17 00:02:23.022197 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-17 00:02:23.022201 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 00:02:23.022205 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022209 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022213 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022217 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022220 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022224 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-17 00:02:23.022228 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022232 | orchestrator | + size = 80
2026-04-17 00:02:23.022236 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022239 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022243 | orchestrator | }
2026-04-17 00:02:23.022247 | orchestrator |
2026-04-17 00:02:23.022251 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-17 00:02:23.022255 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 00:02:23.022258 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022262 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022266 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022270 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022273 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022277 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-17 00:02:23.022281 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022285 | orchestrator | + size = 80
2026-04-17 00:02:23.022293 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022297 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022301 | orchestrator | }
2026-04-17 00:02:23.022304 | orchestrator |
2026-04-17 00:02:23.022308 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-17 00:02:23.022312 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 00:02:23.022316 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022320 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022323 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022327 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022331 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022335 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-17 00:02:23.022339 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022342 | orchestrator | + size = 80
2026-04-17 00:02:23.022346 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022350 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022354 | orchestrator | }
2026-04-17 00:02:23.022357 | orchestrator |
2026-04-17 00:02:23.022361 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-17 00:02:23.022365 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 00:02:23.022369 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022373 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022383 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022391 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022395 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022398 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-17 00:02:23.022402 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022406 | orchestrator | + size = 80
2026-04-17 00:02:23.022410 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022414 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022417 | orchestrator | }
2026-04-17 00:02:23.022421 | orchestrator |
2026-04-17 00:02:23.022425 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-17 00:02:23.022429 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-17 00:02:23.022432 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022436 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022440 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022444 | orchestrator | + image_id = (known after apply)
2026-04-17 00:02:23.022448 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022452 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-17 00:02:23.022455 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022459 | orchestrator | + size = 80
2026-04-17 00:02:23.022463 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022467 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022471 | orchestrator | }
2026-04-17 00:02:23.022474 | orchestrator |
2026-04-17 00:02:23.022478 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-17 00:02:23.022483 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022487 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022491 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022495 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022498 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022503 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-17 00:02:23.022506 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022510 | orchestrator | + size = 20
2026-04-17 00:02:23.022514 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022518 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022522 | orchestrator | }
2026-04-17 00:02:23.022525 | orchestrator |
2026-04-17 00:02:23.022529 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-17 00:02:23.022533 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022537 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022540 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022544 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022548 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022552 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-17 00:02:23.022555 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022559 | orchestrator | + size = 20
2026-04-17 00:02:23.022563 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022567 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022570 | orchestrator | }
2026-04-17 00:02:23.022574 | orchestrator |
2026-04-17 00:02:23.022578 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-17 00:02:23.022582 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022585 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022589 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022593 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022597 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022601 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-17 00:02:23.022604 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022611 | orchestrator | + size = 20
2026-04-17 00:02:23.022615 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022619 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022623 | orchestrator | }
2026-04-17 00:02:23.022626 | orchestrator |
2026-04-17 00:02:23.022630 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-17 00:02:23.022634 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022638 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022641 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022645 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022655 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022659 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-17 00:02:23.022663 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022667 | orchestrator | + size = 20
2026-04-17 00:02:23.022671 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022674 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022678 | orchestrator | }
2026-04-17 00:02:23.022682 | orchestrator |
2026-04-17 00:02:23.022712 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-17 00:02:23.022717 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022721 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022724 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022728 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022732 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022736 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-17 00:02:23.022739 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022743 | orchestrator | + size = 20
2026-04-17 00:02:23.022747 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022751 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022755 | orchestrator | }
2026-04-17 00:02:23.022758 | orchestrator |
2026-04-17 00:02:23.022762 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-17 00:02:23.022766 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022770 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022773 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022777 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022781 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022785 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-17 00:02:23.022792 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022796 | orchestrator | + size = 20
2026-04-17 00:02:23.022800 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022804 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022808 | orchestrator | }
2026-04-17 00:02:23.022811 | orchestrator |
2026-04-17 00:02:23.022815 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-17 00:02:23.022819 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022823 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022827 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022830 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022834 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022838 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-17 00:02:23.022842 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022846 | orchestrator | + size = 20
2026-04-17 00:02:23.022849 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022853 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022857 | orchestrator | }
2026-04-17 00:02:23.022861 | orchestrator |
2026-04-17 00:02:23.022864 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-17 00:02:23.022868 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-17 00:02:23.022876 | orchestrator | + attachment = (known after apply)
2026-04-17 00:02:23.022880 | orchestrator | + availability_zone = "nova"
2026-04-17 00:02:23.022883 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.022887 | orchestrator | + metadata = (known after apply)
2026-04-17 00:02:23.022891 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-17 00:02:23.022895 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.022898 | orchestrator | + size = 20
2026-04-17 00:02:23.022902 | orchestrator | + volume_retype_policy = "never"
2026-04-17 00:02:23.022906 | orchestrator | + volume_type = "ssd"
2026-04-17 00:02:23.022910 | orchestrator | }
2026-04-17 00:02:23.022913 | orchestrator |
2026-04-17 00:02:23.022917 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-17 00:02:23.022921 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-17 00:02:23.022925 | orchestrator | + attachment = (known after apply) 2026-04-17 00:02:23.022929 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.022932 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.022936 | orchestrator | + metadata = (known after apply) 2026-04-17 00:02:23.022940 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-17 00:02:23.022944 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.022947 | orchestrator | + size = 20 2026-04-17 00:02:23.022951 | orchestrator | + volume_retype_policy = "never" 2026-04-17 00:02:23.022955 | orchestrator | + volume_type = "ssd" 2026-04-17 00:02:23.022959 | orchestrator | } 2026-04-17 00:02:23.022963 | orchestrator | 2026-04-17 00:02:23.022966 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-17 00:02:23.022970 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-17 00:02:23.022974 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.022978 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.022981 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.022985 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.022989 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.022993 | orchestrator | + config_drive = true 2026-04-17 00:02:23.022999 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.023003 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.023007 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-17 00:02:23.023011 | orchestrator | + force_delete = false 2026-04-17 00:02:23.023015 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.023018 | 
orchestrator | + id = (known after apply) 2026-04-17 00:02:23.023022 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.023026 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.023030 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.023033 | orchestrator | + name = "testbed-manager" 2026-04-17 00:02:23.023037 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.023041 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.023045 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.023049 | orchestrator | + stop_before_destroy = false 2026-04-17 00:02:23.023052 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.023056 | orchestrator | + user_data = (sensitive value) 2026-04-17 00:02:23.023060 | orchestrator | 2026-04-17 00:02:23.023064 | orchestrator | + block_device { 2026-04-17 00:02:23.023068 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.023072 | orchestrator | + delete_on_termination = false 2026-04-17 00:02:23.023075 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.023079 | orchestrator | + multiattach = false 2026-04-17 00:02:23.023083 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.023086 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023094 | orchestrator | } 2026-04-17 00:02:23.023098 | orchestrator | 2026-04-17 00:02:23.023102 | orchestrator | + network { 2026-04-17 00:02:23.023106 | orchestrator | + access_network = false 2026-04-17 00:02:23.023109 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.023113 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.023117 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.023121 | orchestrator | + name = (known after apply) 2026-04-17 00:02:23.023124 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.023128 | orchestrator | + uuid = (known after apply) 2026-04-17 
00:02:23.023132 | orchestrator | } 2026-04-17 00:02:23.023136 | orchestrator | } 2026-04-17 00:02:23.023139 | orchestrator | 2026-04-17 00:02:23.023143 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-17 00:02:23.023147 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 00:02:23.023151 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.023155 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.023158 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.023162 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.023166 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.023170 | orchestrator | + config_drive = true 2026-04-17 00:02:23.023173 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.023180 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.023184 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 00:02:23.023187 | orchestrator | + force_delete = false 2026-04-17 00:02:23.023191 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.023195 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.023199 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.023203 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.023206 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.023210 | orchestrator | + name = "testbed-node-0" 2026-04-17 00:02:23.023214 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.023218 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.023221 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.023225 | orchestrator | + stop_before_destroy = false 2026-04-17 00:02:23.023229 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.023233 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 00:02:23.023236 | orchestrator | 2026-04-17 00:02:23.023240 | orchestrator | + block_device { 2026-04-17 00:02:23.023244 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.023248 | orchestrator | + delete_on_termination = false 2026-04-17 00:02:23.023252 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.023255 | orchestrator | + multiattach = false 2026-04-17 00:02:23.023259 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.023263 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023267 | orchestrator | } 2026-04-17 00:02:23.023271 | orchestrator | 2026-04-17 00:02:23.023274 | orchestrator | + network { 2026-04-17 00:02:23.023278 | orchestrator | + access_network = false 2026-04-17 00:02:23.023282 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.023286 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.023289 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.023293 | orchestrator | + name = (known after apply) 2026-04-17 00:02:23.023297 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.023301 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023304 | orchestrator | } 2026-04-17 00:02:23.023308 | orchestrator | } 2026-04-17 00:02:23.023312 | orchestrator | 2026-04-17 00:02:23.023316 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-17 00:02:23.023320 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 00:02:23.023323 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.023330 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.023334 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.023338 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.023341 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.023345 
| orchestrator | + config_drive = true 2026-04-17 00:02:23.023349 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.023353 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.023356 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 00:02:23.023360 | orchestrator | + force_delete = false 2026-04-17 00:02:23.023364 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.023368 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.023371 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.023375 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.023379 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.023383 | orchestrator | + name = "testbed-node-1" 2026-04-17 00:02:23.023386 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.023390 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.023394 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.023398 | orchestrator | + stop_before_destroy = false 2026-04-17 00:02:23.023401 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.023408 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 00:02:23.023412 | orchestrator | 2026-04-17 00:02:23.023416 | orchestrator | + block_device { 2026-04-17 00:02:23.023419 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.023423 | orchestrator | + delete_on_termination = false 2026-04-17 00:02:23.023427 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.023431 | orchestrator | + multiattach = false 2026-04-17 00:02:23.023434 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.023438 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023442 | orchestrator | } 2026-04-17 00:02:23.023446 | orchestrator | 2026-04-17 00:02:23.023450 | orchestrator | + network { 2026-04-17 00:02:23.023453 | orchestrator | + access_network = 
false 2026-04-17 00:02:23.023457 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.023461 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.023465 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.023468 | orchestrator | + name = (known after apply) 2026-04-17 00:02:23.023472 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.023476 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023480 | orchestrator | } 2026-04-17 00:02:23.023484 | orchestrator | } 2026-04-17 00:02:23.023487 | orchestrator | 2026-04-17 00:02:23.023491 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-17 00:02:23.023495 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 00:02:23.023499 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.023502 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.023506 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.023510 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.023514 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.023518 | orchestrator | + config_drive = true 2026-04-17 00:02:23.023522 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.023525 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.023529 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 00:02:23.023533 | orchestrator | + force_delete = false 2026-04-17 00:02:23.023537 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.023540 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.023544 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.023551 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.023555 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.023559 | orchestrator | + name = 
"testbed-node-2" 2026-04-17 00:02:23.023563 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.023569 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.023573 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.023576 | orchestrator | + stop_before_destroy = false 2026-04-17 00:02:23.023580 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.023584 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 00:02:23.023588 | orchestrator | 2026-04-17 00:02:23.023592 | orchestrator | + block_device { 2026-04-17 00:02:23.023596 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.023599 | orchestrator | + delete_on_termination = false 2026-04-17 00:02:23.023603 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.023607 | orchestrator | + multiattach = false 2026-04-17 00:02:23.023611 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.023614 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023618 | orchestrator | } 2026-04-17 00:02:23.023622 | orchestrator | 2026-04-17 00:02:23.023626 | orchestrator | + network { 2026-04-17 00:02:23.023630 | orchestrator | + access_network = false 2026-04-17 00:02:23.023633 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.023637 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.023641 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.023645 | orchestrator | + name = (known after apply) 2026-04-17 00:02:23.023649 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.023652 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023656 | orchestrator | } 2026-04-17 00:02:23.023660 | orchestrator | } 2026-04-17 00:02:23.023664 | orchestrator | 2026-04-17 00:02:23.023670 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-17 00:02:23.023674 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-17 00:02:23.023678 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.023682 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.023696 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.023700 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.023704 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.023707 | orchestrator | + config_drive = true 2026-04-17 00:02:23.023711 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.023715 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.023719 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 00:02:23.023723 | orchestrator | + force_delete = false 2026-04-17 00:02:23.023726 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.023730 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.023734 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.023738 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.023741 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.023745 | orchestrator | + name = "testbed-node-3" 2026-04-17 00:02:23.023749 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.023753 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.023756 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.023760 | orchestrator | + stop_before_destroy = false 2026-04-17 00:02:23.023764 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.023768 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 00:02:23.023772 | orchestrator | 2026-04-17 00:02:23.023775 | orchestrator | + block_device { 2026-04-17 00:02:23.023779 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.023783 | orchestrator | + delete_on_termination = false 2026-04-17 
00:02:23.023787 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.023794 | orchestrator | + multiattach = false 2026-04-17 00:02:23.023797 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.023801 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023805 | orchestrator | } 2026-04-17 00:02:23.023809 | orchestrator | 2026-04-17 00:02:23.023812 | orchestrator | + network { 2026-04-17 00:02:23.023816 | orchestrator | + access_network = false 2026-04-17 00:02:23.023820 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.023824 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.023828 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.023831 | orchestrator | + name = (known after apply) 2026-04-17 00:02:23.023835 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.023839 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023843 | orchestrator | } 2026-04-17 00:02:23.023846 | orchestrator | } 2026-04-17 00:02:23.023850 | orchestrator | 2026-04-17 00:02:23.023854 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-17 00:02:23.023858 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 00:02:23.023862 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.023866 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.023869 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.023873 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.023877 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.023881 | orchestrator | + config_drive = true 2026-04-17 00:02:23.023884 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.023888 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.023892 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 00:02:23.023896 | 
orchestrator | + force_delete = false 2026-04-17 00:02:23.023899 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.023903 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.023907 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.023911 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.023914 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.023918 | orchestrator | + name = "testbed-node-4" 2026-04-17 00:02:23.023922 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.023926 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.023930 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.023933 | orchestrator | + stop_before_destroy = false 2026-04-17 00:02:23.023937 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.023941 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 00:02:23.023945 | orchestrator | 2026-04-17 00:02:23.023949 | orchestrator | + block_device { 2026-04-17 00:02:23.023952 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.023956 | orchestrator | + delete_on_termination = false 2026-04-17 00:02:23.023960 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.023964 | orchestrator | + multiattach = false 2026-04-17 00:02:23.023970 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.023974 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.023977 | orchestrator | } 2026-04-17 00:02:23.023981 | orchestrator | 2026-04-17 00:02:23.023985 | orchestrator | + network { 2026-04-17 00:02:23.023989 | orchestrator | + access_network = false 2026-04-17 00:02:23.023993 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.023996 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.024000 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.024004 | orchestrator | + name = (known 
after apply) 2026-04-17 00:02:23.024008 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.024011 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.024015 | orchestrator | } 2026-04-17 00:02:23.024019 | orchestrator | } 2026-04-17 00:02:23.024026 | orchestrator | 2026-04-17 00:02:23.024030 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-17 00:02:23.024034 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-17 00:02:23.024037 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-17 00:02:23.024041 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-17 00:02:23.024045 | orchestrator | + all_metadata = (known after apply) 2026-04-17 00:02:23.024049 | orchestrator | + all_tags = (known after apply) 2026-04-17 00:02:23.024052 | orchestrator | + availability_zone = "nova" 2026-04-17 00:02:23.024056 | orchestrator | + config_drive = true 2026-04-17 00:02:23.024060 | orchestrator | + created = (known after apply) 2026-04-17 00:02:23.024064 | orchestrator | + flavor_id = (known after apply) 2026-04-17 00:02:23.024067 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-17 00:02:23.024071 | orchestrator | + force_delete = false 2026-04-17 00:02:23.024075 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-17 00:02:23.024079 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.024082 | orchestrator | + image_id = (known after apply) 2026-04-17 00:02:23.024086 | orchestrator | + image_name = (known after apply) 2026-04-17 00:02:23.024090 | orchestrator | + key_pair = "testbed" 2026-04-17 00:02:23.024094 | orchestrator | + name = "testbed-node-5" 2026-04-17 00:02:23.024097 | orchestrator | + power_state = "active" 2026-04-17 00:02:23.024101 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.024105 | orchestrator | + security_groups = (known after apply) 2026-04-17 00:02:23.024109 | orchestrator | + 
stop_before_destroy = false 2026-04-17 00:02:23.024112 | orchestrator | + updated = (known after apply) 2026-04-17 00:02:23.024116 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-17 00:02:23.024120 | orchestrator | 2026-04-17 00:02:23.024124 | orchestrator | + block_device { 2026-04-17 00:02:23.024128 | orchestrator | + boot_index = 0 2026-04-17 00:02:23.024132 | orchestrator | + delete_on_termination = false 2026-04-17 00:02:23.024135 | orchestrator | + destination_type = "volume" 2026-04-17 00:02:23.024139 | orchestrator | + multiattach = false 2026-04-17 00:02:23.024143 | orchestrator | + source_type = "volume" 2026-04-17 00:02:23.024147 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.024150 | orchestrator | } 2026-04-17 00:02:23.024154 | orchestrator | 2026-04-17 00:02:23.024158 | orchestrator | + network { 2026-04-17 00:02:23.024162 | orchestrator | + access_network = false 2026-04-17 00:02:23.024166 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-17 00:02:23.024169 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-17 00:02:23.024173 | orchestrator | + mac = (known after apply) 2026-04-17 00:02:23.024177 | orchestrator | + name = (known after apply) 2026-04-17 00:02:23.024181 | orchestrator | + port = (known after apply) 2026-04-17 00:02:23.024184 | orchestrator | + uuid = (known after apply) 2026-04-17 00:02:23.024188 | orchestrator | } 2026-04-17 00:02:23.024192 | orchestrator | } 2026-04-17 00:02:23.024196 | orchestrator | 2026-04-17 00:02:23.024200 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-17 00:02:23.024204 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-17 00:02:23.024207 | orchestrator | + fingerprint = (known after apply) 2026-04-17 00:02:23.024211 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.024215 | orchestrator | + name = "testbed" 2026-04-17 00:02:23.024219 | orchestrator | + private_key = 
(sensitive value) 2026-04-17 00:02:23.024223 | orchestrator | + public_key = (known after apply) 2026-04-17 00:02:23.024226 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.024230 | orchestrator | + user_id = (known after apply) 2026-04-17 00:02:23.024234 | orchestrator | } 2026-04-17 00:02:23.024238 | orchestrator | 2026-04-17 00:02:23.024241 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-17 00:02:23.024245 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-17 00:02:23.024252 | orchestrator | + device = (known after apply) 2026-04-17 00:02:23.024256 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.024260 | orchestrator | + instance_id = (known after apply) 2026-04-17 00:02:23.024263 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.024270 | orchestrator | + volume_id = (known after apply) 2026-04-17 00:02:23.024274 | orchestrator | } 2026-04-17 00:02:23.024278 | orchestrator | 2026-04-17 00:02:23.024282 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-17 00:02:23.024286 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-17 00:02:23.024289 | orchestrator | + device = (known after apply) 2026-04-17 00:02:23.024293 | orchestrator | + id = (known after apply) 2026-04-17 00:02:23.024297 | orchestrator | + instance_id = (known after apply) 2026-04-17 00:02:23.024301 | orchestrator | + region = (known after apply) 2026-04-17 00:02:23.024304 | orchestrator | + volume_id = (known after apply) 2026-04-17 00:02:23.024308 | orchestrator | } 2026-04-17 00:02:23.024312 | orchestrator | 2026-04-17 00:02:23.024316 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-17 00:02:23.024320 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-04-17 00:02:23 | orchestrator | (Terraform plan, continued)
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-17 00:02:23.026564 | orchestrator | + network_id = (known after apply)
2026-04-17 00:02:23.026568 | orchestrator | + no_gateway = false
2026-04-17 00:02:23.026572 | orchestrator | + region = (known after apply)
2026-04-17 00:02:23.026576 | orchestrator | + service_types = (known after apply)
2026-04-17 00:02:23.026582 | orchestrator | + tenant_id = (known after apply)
2026-04-17 00:02:23.026586 | orchestrator |
2026-04-17 00:02:23.026590 | orchestrator | + allocation_pool {
2026-04-17 00:02:23.026593 | orchestrator | + end = "192.168.31.250"
2026-04-17 00:02:23.026597 | orchestrator | + start = "192.168.31.200"
2026-04-17 00:02:23.026601 | orchestrator | }
2026-04-17 00:02:23.026605 | orchestrator | }
2026-04-17 00:02:23.026609 | orchestrator |
2026-04-17 00:02:23.026612 | orchestrator | # terraform_data.image will be created
2026-04-17 00:02:23.026616 | orchestrator | + resource "terraform_data" "image" {
2026-04-17 00:02:23.026620 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.026624 | orchestrator | + input = "Ubuntu 24.04"
2026-04-17 00:02:23.026627 | orchestrator | + output = (known after apply)
2026-04-17 00:02:23.026631 | orchestrator | }
2026-04-17 00:02:23.026635 | orchestrator |
2026-04-17 00:02:23.026639 | orchestrator | # terraform_data.image_node will be created
2026-04-17 00:02:23.026642 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-17 00:02:23.026646 | orchestrator | + id = (known after apply)
2026-04-17 00:02:23.026650 | orchestrator | + input = "Ubuntu 24.04"
2026-04-17 00:02:23.026654 | orchestrator | + output = (known after apply)
2026-04-17 00:02:23.026657 | orchestrator | }
2026-04-17 00:02:23.026661 | orchestrator |
2026-04-17 00:02:23.026665 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-17 00:02:23.026669 | orchestrator |
2026-04-17 00:02:23.026672 | orchestrator | Changes to Outputs:
2026-04-17 00:02:23.026676 | orchestrator | + manager_address = (sensitive value)
2026-04-17 00:02:23.026680 | orchestrator | + private_key = (sensitive value)
2026-04-17 00:02:23.106094 | orchestrator | terraform_data.image: Creating...
2026-04-17 00:02:23.220046 | orchestrator | terraform_data.image_node: Creating...
2026-04-17 00:02:23.220518 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=35fcd18c-1e37-7093-5f36-bd4fceb736c6]
2026-04-17 00:02:23.220966 | orchestrator | terraform_data.image: Creation complete after 0s [id=f8c7b628-1826-4c60-096a-c68c5dee1152]
2026-04-17 00:02:23.233364 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-17 00:02:23.238874 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-17 00:02:23.247883 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-17 00:02:23.248677 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-17 00:02:23.248791 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-17 00:02:23.249622 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-17 00:02:23.249979 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-17 00:02:23.251341 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-17 00:02:23.251453 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-17 00:02:23.251525 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-17 00:02:23.772634 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-17 00:02:23.777261 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-17 00:02:23.793480 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-17 00:02:23.799803 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-17 00:02:23.818468 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-17 00:02:23.823530 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-17 00:02:24.429278 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=71cfe2f9-b1a0-432d-a223-8fe749020658]
2026-04-17 00:02:24.437024 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-17 00:02:26.886598 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=bde58240-ae36-45ef-aa17-191037945ea9]
2026-04-17 00:02:26.897339 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-17 00:02:26.916149 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b]
2026-04-17 00:02:26.929980 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-17 00:02:26.944105 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=7da9734b-be35-484c-b986-e25152d7af20]
2026-04-17 00:02:26.951348 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-17 00:02:26.952292 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=cf4610dd-7a79-47aa-aaad-c27237a9a128]
2026-04-17 00:02:26.956257 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-17 00:02:26.974445 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=67bd38c1-9345-4e78-a265-9243ac6ca363]
2026-04-17 00:02:26.978563 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=fef13603-3987-4653-89a2-a4e711571ea7]
2026-04-17 00:02:26.979609 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-17 00:02:26.989589 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-17 00:02:26.992527 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=2308bb686ee8a751dd193e2ee3e1eea9bbd3c738]
2026-04-17 00:02:26.995960 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-17 00:02:27.061879 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06]
2026-04-17 00:02:27.076892 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-17 00:02:27.080250 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=0d637dae-6e45-402a-82ea-09e5e6b1641c]
2026-04-17 00:02:27.080788 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=e4ea3c0d8c7a07973a1a13dc3483f1b1d1931522]
2026-04-17 00:02:27.088899 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=e49fa4cf-cf8d-4b96-9e62-961cb10cabfe]
2026-04-17 00:02:27.090569 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-17 00:02:27.834218 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=5dc0147e-a45a-45b5-8cf0-fcdb2e4c8c8c]
2026-04-17 00:02:28.159838 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=b44d53cf-5353-41a4-ab49-029c2a348a52]
2026-04-17 00:02:28.167141 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-17 00:02:30.357892 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=016aad87-77cf-4f84-939d-7c9b8b9ffacf]
2026-04-17 00:02:30.414909 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=0c01abd8-1868-4e11-b3b4-646d408eb2d6]
2026-04-17 00:02:30.443760 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=6a612a59-a293-42e1-94a9-7d2382f6f1f5]
2026-04-17 00:02:30.461995 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=18663bd2-cd83-4de0-86d3-64ee8af634cd]
2026-04-17 00:02:30.465354 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=935f922d-d22c-4ae3-8d21-594fc8e3804c]
2026-04-17 00:02:30.468866 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=e6cf65b8-cccd-43e0-af7e-f41bbf3c7356]
2026-04-17 00:02:31.081827 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=a646429d-bb0a-4040-9bc5-c14382e444d1]
2026-04-17 00:02:31.089220 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-17 00:02:31.089850 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-17 00:02:31.091278 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-17 00:02:31.350108 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b2b3f4c7-f118-4e24-965f-ddbda0de12a9]
2026-04-17 00:02:31.360126 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-17 00:02:31.362590 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-17 00:02:31.362635 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-17 00:02:31.363371 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-17 00:02:31.371553 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-17 00:02:31.375459 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-17 00:02:31.375499 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-17 00:02:31.377106 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-17 00:02:31.865108 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=1bf6bade-f4f0-4f35-9704-303b04dde8a5]
2026-04-17 00:02:31.879869 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-17 00:02:31.940781 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=500c6bb7-7882-4fcb-9aab-4ebd2ec70b15]
2026-04-17 00:02:32.129309 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-17 00:02:32.129371 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=32cafdb3-4be2-4cbb-8c42-a3cdbf2f47cd]
2026-04-17 00:02:32.129382 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-17 00:02:32.180809 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=6f1cf3a5-7394-4c2c-809d-7a568e051c83]
2026-04-17 00:02:32.191072 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-17 00:02:32.506201 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=2e14a76a-bb73-44fe-8ed2-a54be2ff07f7]
2026-04-17 00:02:32.510370 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-17 00:02:32.917080 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=141ff78f-5dd6-4a7c-9494-dc4f0b5c9ad3]
2026-04-17 00:02:32.925197 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-17 00:02:32.942465 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=397af21f-4297-4353-ba9a-5c6a997f2edf]
2026-04-17 00:02:32.947994 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-17 00:02:33.172817 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=9ab8466e-344c-48e0-aca7-4f01671285b7]
2026-04-17 00:02:33.179003 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-17 00:02:33.341055 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=24f90b5f-9df6-4285-bd15-acbe821e7402]
2026-04-17 00:02:33.370620 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=cf88e83d-8c15-4048-a04f-74d83e536281]
2026-04-17 00:02:33.422201 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=42c2da1b-03c7-4d49-a8e7-6aae520e6b88]
2026-04-17 00:02:33.535732 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 3s [id=5e1a1693-62ef-432e-9fe7-f610cf0ff917]
2026-04-17 00:02:33.830386 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=93cbfbad-b7b5-4f7e-8ee8-8f90b2fc4adc]
2026-04-17 00:02:34.342195 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=44e8c8d3-f8e4-4d69-bd9b-fd912c86382f]
2026-04-17 00:02:34.498469 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=f515c78c-c86f-4160-a176-397fb6511f32]
2026-04-17 00:02:34.777172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=582dd305-15e4-43b2-bc7c-264f2ab1e567]
2026-04-17 00:02:35.183233 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=10652339-07be-4725-87bb-e07474c015e8]
2026-04-17 00:02:35.651674 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=a95ffbb8-5957-4c13-85da-140852a23729]
2026-04-17 00:02:35.669964 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-17 00:02:35.682226 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-17 00:02:35.682829 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-17 00:02:35.683781 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-17 00:02:35.697740 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-17 00:02:35.697896 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-17 00:02:35.718948 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-17 00:02:39.291405 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=3db56506-53ac-4bdb-b866-01e2d67256d1]
2026-04-17 00:02:39.302141 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-17 00:02:39.309236 | orchestrator | local_file.inventory: Creating...
2026-04-17 00:02:39.310946 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-17 00:02:39.319167 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1f4fed90a6302371aa9369396e13d234c2852995]
2026-04-17 00:02:39.320028 | orchestrator | local_file.inventory: Creation complete after 0s [id=7acc569e464a6fc267ebbb86adf37ff084cf9fd7]
2026-04-17 00:02:40.861885 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=3db56506-53ac-4bdb-b866-01e2d67256d1]
2026-04-17 00:02:45.685040 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-17 00:02:45.686315 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-17 00:02:45.686350 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-17 00:02:45.702748 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-17 00:02:45.702796 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-17 00:02:45.720085 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-17 00:02:55.694469 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-17 00:02:55.694597 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-17 00:02:55.694622 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-17 00:02:55.703849 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-17 00:02:55.703903 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-17 00:02:55.721267 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-17 00:03:05.703641 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-17 00:03:05.703787 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-17 00:03:05.703800 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-17 00:03:05.704951 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-17 00:03:05.705068 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-17 00:03:05.722500 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-17 00:03:06.667199 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=b9cf6d35-2f2e-40e2-894b-2def6bbe4aa6]
2026-04-17 00:03:06.712606 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=a9e0699c-af21-469e-846d-a051b3061da5]
2026-04-17 00:03:06.786531 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=3f63a1e0-9089-43bf-b4bf-d7d670d050a2]
2026-04-17 00:03:15.712418 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-17 00:03:15.712538 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-17 00:03:15.723012 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-17 00:03:17.085500 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=b390d4bf-c20d-4b44-ad26-aa443ccfe779]
2026-04-17 00:03:25.712935 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-17 00:03:25.713044 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-17 00:03:26.716960 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=6fe6cec8-1735-45a4-9d44-0576eba61c00]
2026-04-17 00:03:27.056970 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=38f11c8d-1b55-4e03-a3c2-ccae17b8d0f3]
2026-04-17 00:03:27.078236 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-17 00:03:27.088351 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=766510564883522167]
2026-04-17 00:03:27.099385 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-17 00:03:27.107480 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-17 00:03:27.108566 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-17 00:03:27.110230 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-17 00:03:27.116216 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-17 00:03:27.116785 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-17 00:03:27.119340 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-17 00:03:27.123686 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-17 00:03:27.133824 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-17 00:03:27.137133 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-17 00:03:30.505433 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=38f11c8d-1b55-4e03-a3c2-ccae17b8d0f3/e49fa4cf-cf8d-4b96-9e62-961cb10cabfe]
2026-04-17 00:03:30.528462 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=3f63a1e0-9089-43bf-b4bf-d7d670d050a2/bde58240-ae36-45ef-aa17-191037945ea9]
2026-04-17 00:03:30.563967 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=b9cf6d35-2f2e-40e2-894b-2def6bbe4aa6/0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b]
2026-04-17 00:03:36.632511 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=38f11c8d-1b55-4e03-a3c2-ccae17b8d0f3/cf4610dd-7a79-47aa-aaad-c27237a9a128]
2026-04-17 00:03:36.661111 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=3f63a1e0-9089-43bf-b4bf-d7d670d050a2/0d637dae-6e45-402a-82ea-09e5e6b1641c]
2026-04-17 00:03:36.698337 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=b9cf6d35-2f2e-40e2-894b-2def6bbe4aa6/9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06]
2026-04-17 00:03:36.716726 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=38f11c8d-1b55-4e03-a3c2-ccae17b8d0f3/7da9734b-be35-484c-b986-e25152d7af20]
2026-04-17 00:03:36.746435 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=3f63a1e0-9089-43bf-b4bf-d7d670d050a2/fef13603-3987-4653-89a2-a4e711571ea7]
2026-04-17 00:03:36.763473 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=b9cf6d35-2f2e-40e2-894b-2def6bbe4aa6/67bd38c1-9345-4e78-a265-9243ac6ca363]
2026-04-17 00:03:37.134476 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-17 00:03:47.143724 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-17 00:03:47.916475 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=54b305c2-ff52-4b81-bfff-b5e508f7ca09]
2026-04-17 00:03:47.937546 | orchestrator |
2026-04-17 00:03:47.937672 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-17 00:03:47.937694 | orchestrator |
2026-04-17 00:03:47.937702 | orchestrator | Outputs:
2026-04-17 00:03:47.937709 | orchestrator |
2026-04-17 00:03:47.937716 | orchestrator | manager_address =
2026-04-17 00:03:47.937723 | orchestrator | private_key =
2026-04-17 00:03:48.223810 | orchestrator | ok: Runtime: 0:01:30.507816
2026-04-17 00:03:48.244788 |
2026-04-17 00:03:48.244959 | TASK [Create infrastructure (stable)]
2026-04-17 00:03:48.778961 | orchestrator | skipping: Conditional result was False
2026-04-17 00:03:48.798142 |
2026-04-17 00:03:48.798337 | TASK [Fetch manager address]
2026-04-17 00:03:49.301819 | orchestrator | ok
2026-04-17 00:03:49.311100 |
2026-04-17 00:03:49.311234 | TASK [Set manager_host address]
2026-04-17 00:03:49.391941 | orchestrator | ok
2026-04-17 00:03:49.402605 |
2026-04-17 00:03:49.402746 | LOOP [Update ansible collections]
2026-04-17 00:03:50.679553 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-17 00:03:50.679883 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-17 00:03:50.679932 | orchestrator | Starting galaxy collection install process
2026-04-17 00:03:50.679958 | orchestrator | Process install dependency map
2026-04-17 00:03:50.679981 | orchestrator | Starting collection install process
2026-04-17 00:03:50.680013 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-04-17 00:03:50.680040 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-04-17 00:03:50.680072 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-17 00:03:50.680130 | orchestrator | ok: Item: commons Runtime: 0:00:00.944902
2026-04-17 00:03:51.783115 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-17 00:03:51.783244 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-17 00:03:51.783278 | orchestrator | Starting galaxy collection install process
2026-04-17 00:03:51.783304 | orchestrator | Process install dependency map
2026-04-17 00:03:51.783328 | orchestrator | Starting collection install process
2026-04-17 00:03:51.783350 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-04-17 00:03:51.783372 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-04-17 00:03:51.783446 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-17 00:03:51.783487 | orchestrator | ok: Item: services Runtime: 0:00:00.790071
2026-04-17 00:03:51.801265 |
2026-04-17 00:03:51.801403 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-17 00:04:02.368730 | orchestrator | ok
2026-04-17 00:04:02.378580 |
2026-04-17 00:04:02.379018 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-17 00:05:02.428231 | orchestrator | ok
2026-04-17 00:05:02.439278 |
2026-04-17 00:05:02.439413 | TASK [Fetch manager ssh hostkey]
2026-04-17 00:05:04.012008 | orchestrator | Output suppressed because no_log was given
2026-04-17 00:05:04.028041 |
2026-04-17 00:05:04.028301 | TASK [Get ssh keypair from terraform environment]
2026-04-17 00:05:04.566093 | orchestrator | ok: Runtime: 0:00:00.009693
2026-04-17 00:05:04.585576 |
2026-04-17 00:05:04.585748 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-17 00:05:04.633141 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-17 00:05:04.643224 |
2026-04-17 00:05:04.643353 | TASK [Run manager part 0]
2026-04-17 00:05:05.611548 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-17 00:05:05.662851 | orchestrator |
2026-04-17 00:05:05.662930 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-17 00:05:05.662942 | orchestrator |
2026-04-17 00:05:05.662959 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-17 00:05:07.574593 | orchestrator | ok: [testbed-manager]
2026-04-17 00:05:07.574650 | orchestrator |
2026-04-17 00:05:07.574680 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-17 00:05:07.574693 | orchestrator |
2026-04-17 00:05:07.574704 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 00:05:09.344504 | orchestrator | ok: [testbed-manager]
2026-04-17 00:05:09.344609 | orchestrator |
2026-04-17 00:05:09.344622 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-17 00:05:10.034851 | orchestrator | ok: [testbed-manager]
2026-04-17 00:05:10.034920 | orchestrator |
2026-04-17 00:05:10.034930 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-17 00:05:10.082681 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:05:10.082768 | orchestrator |
2026-04-17 00:05:10.082779 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-17 00:05:10.114184 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:05:10.114265 | orchestrator |
2026-04-17 00:05:10.114274 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-17 00:05:10.145871 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:05:10.146167 | orchestrator |
2026-04-17 00:05:10.146242 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-17 00:05:10.864356 | orchestrator | changed: [testbed-manager]
2026-04-17 00:05:10.864409 | orchestrator |
2026-04-17 00:05:10.864418 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-17 00:08:11.134692 | orchestrator | changed: [testbed-manager]
2026-04-17 00:08:11.134815 | orchestrator |
2026-04-17 00:08:11.134846 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-17 00:11:01.850888 | orchestrator | changed: [testbed-manager]
2026-04-17 00:11:01.851006 | orchestrator |
2026-04-17 00:11:01.851028 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-17 00:11:21.694693 | orchestrator | changed: [testbed-manager]
2026-04-17 00:11:21.694811 | orchestrator |
2026-04-17 00:11:21.694832 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-17 00:11:31.132428 | orchestrator | changed: [testbed-manager]
2026-04-17 00:11:31.132533 | orchestrator |
2026-04-17 00:11:31.132549 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-17 00:11:31.187062 | orchestrator | ok: [testbed-manager]
2026-04-17 00:11:31.187145 | orchestrator |
2026-04-17 00:11:31.187155 | orchestrator | TASK
[Get current user] ******************************************************** 2026-04-17 00:11:31.950956 | orchestrator | ok: [testbed-manager] 2026-04-17 00:11:31.951010 | orchestrator | 2026-04-17 00:11:31.951018 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-17 00:11:32.673906 | orchestrator | changed: [testbed-manager] 2026-04-17 00:11:32.673989 | orchestrator | 2026-04-17 00:11:32.674013 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-17 00:11:38.923957 | orchestrator | changed: [testbed-manager] 2026-04-17 00:11:38.924028 | orchestrator | 2026-04-17 00:11:38.924042 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-17 00:11:44.884272 | orchestrator | changed: [testbed-manager] 2026-04-17 00:11:44.906137 | orchestrator | 2026-04-17 00:11:44.906190 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-17 00:11:47.555815 | orchestrator | changed: [testbed-manager] 2026-04-17 00:11:47.555858 | orchestrator | 2026-04-17 00:11:47.555867 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-17 00:11:49.249999 | orchestrator | changed: [testbed-manager] 2026-04-17 00:11:49.250680 | orchestrator | 2026-04-17 00:11:49.250697 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-17 00:11:50.310151 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-17 00:11:50.310217 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-17 00:11:50.310225 | orchestrator | 2026-04-17 00:11:50.310233 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-17 00:11:50.355477 | orchestrator | [DEPRECATION WARNING]: The connection's stdin 
object is deprecated. Call 2026-04-17 00:11:50.355549 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-17 00:11:50.355562 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-17 00:11:50.355575 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-17 00:11:53.594834 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-17 00:11:53.594952 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-17 00:11:53.594975 | orchestrator | 2026-04-17 00:11:53.594988 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-17 00:11:54.159852 | orchestrator | changed: [testbed-manager] 2026-04-17 00:11:54.159938 | orchestrator | 2026-04-17 00:11:54.159955 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-17 00:13:16.499339 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-17 00:13:16.499398 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-17 00:13:16.499410 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-17 00:13:16.499419 | orchestrator | 2026-04-17 00:13:16.499430 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-17 00:13:18.818659 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-17 00:13:18.818756 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-17 00:13:18.818772 | orchestrator | 2026-04-17 00:13:18.818787 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-17 00:13:18.818799 | orchestrator | 2026-04-17 00:13:18.818810 | orchestrator | TASK [Gathering Facts] ********************************************************* 
2026-04-17 00:13:20.194349 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:20.194404 | orchestrator | 2026-04-17 00:13:20.194412 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-17 00:13:20.247287 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:20.247361 | orchestrator | 2026-04-17 00:13:20.247375 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-17 00:13:20.318499 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:20.318551 | orchestrator | 2026-04-17 00:13:20.318557 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-17 00:13:21.128904 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:21.129032 | orchestrator | 2026-04-17 00:13:21.129064 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-17 00:13:21.835141 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:21.835214 | orchestrator | 2026-04-17 00:13:21.835230 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-17 00:13:23.163768 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-17 00:13:23.163840 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-17 00:13:23.163854 | orchestrator | 2026-04-17 00:13:23.163867 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-17 00:13:24.513083 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:24.513277 | orchestrator | 2026-04-17 00:13:24.513305 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-17 00:13:26.213042 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-17 00:13:26.213112 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-17 
00:13:26.213136 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-17 00:13:26.213143 | orchestrator | 2026-04-17 00:13:26.213151 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-17 00:13:26.266723 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:26.266778 | orchestrator | 2026-04-17 00:13:26.266785 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-17 00:13:26.339053 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:26.339109 | orchestrator | 2026-04-17 00:13:26.339115 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-17 00:13:26.860367 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:26.860473 | orchestrator | 2026-04-17 00:13:26.860487 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-17 00:13:26.937238 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:26.937298 | orchestrator | 2026-04-17 00:13:26.937307 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-17 00:13:27.740273 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-17 00:13:27.740359 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:27.740376 | orchestrator | 2026-04-17 00:13:27.740390 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-17 00:13:27.779730 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:27.779795 | orchestrator | 2026-04-17 00:13:27.779804 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-17 00:13:27.819837 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:27.819919 | orchestrator | 2026-04-17 00:13:27.819952 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-17 00:13:27.860721 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:27.860759 | orchestrator | 2026-04-17 00:13:27.860768 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-17 00:13:27.937025 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:27.937071 | orchestrator | 2026-04-17 00:13:27.937080 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-17 00:13:28.616778 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:28.616821 | orchestrator | 2026-04-17 00:13:28.616827 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-17 00:13:28.616832 | orchestrator | 2026-04-17 00:13:28.616837 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 00:13:29.969823 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:29.969856 | orchestrator | 2026-04-17 00:13:29.969862 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-17 00:13:30.875656 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:30.875692 | orchestrator | 2026-04-17 00:13:30.875698 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:13:30.875704 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-17 00:13:30.875708 | orchestrator | 2026-04-17 00:13:31.466529 | orchestrator | ok: Runtime: 0:08:26.018823 2026-04-17 00:13:31.484236 | 2026-04-17 00:13:31.484406 | TASK [Point out that the log in on the manager is now possible] 2026-04-17 00:13:31.520136 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2026-04-17 00:13:31.529888 | 2026-04-17 00:13:31.530024 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-17 00:13:31.571969 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-17 00:13:31.583067 | 2026-04-17 00:13:31.583314 | TASK [Run manager part 1 + 2] 2026-04-17 00:13:32.470198 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-17 00:13:32.523184 | orchestrator | 2026-04-17 00:13:32.523258 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-17 00:13:32.523276 | orchestrator | 2026-04-17 00:13:32.523308 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 00:13:35.449907 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:35.450012 | orchestrator | 2026-04-17 00:13:35.450089 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-17 00:13:35.488008 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:35.488077 | orchestrator | 2026-04-17 00:13:35.488096 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-17 00:13:35.531974 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:35.532033 | orchestrator | 2026-04-17 00:13:35.532048 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-17 00:13:35.576535 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:35.576576 | orchestrator | 2026-04-17 00:13:35.576584 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-17 00:13:35.644251 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:35.644291 | orchestrator | 2026-04-17 00:13:35.644298 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-17 00:13:35.705902 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:35.705984 | orchestrator | 2026-04-17 00:13:35.705999 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-17 00:13:35.754574 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-17 00:13:35.754607 | orchestrator | 2026-04-17 00:13:35.754613 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-17 00:13:36.406313 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:36.406352 | orchestrator | 2026-04-17 00:13:36.406361 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-17 00:13:36.450048 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:36.450085 | orchestrator | 2026-04-17 00:13:36.450090 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-17 00:13:37.753583 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:37.753632 | orchestrator | 2026-04-17 00:13:37.753641 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-17 00:13:38.342817 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:38.342869 | orchestrator | 2026-04-17 00:13:38.342879 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-17 00:13:39.449746 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:39.449835 | orchestrator | 2026-04-17 00:13:39.449854 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-17 00:13:54.885048 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:54.885124 | orchestrator | 
2026-04-17 00:13:54.885142 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-17 00:13:55.575669 | orchestrator | ok: [testbed-manager] 2026-04-17 00:13:55.575734 | orchestrator | 2026-04-17 00:13:55.575754 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-17 00:13:55.659859 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:13:55.659895 | orchestrator | 2026-04-17 00:13:55.659914 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-17 00:13:56.557156 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:56.557193 | orchestrator | 2026-04-17 00:13:56.557199 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-17 00:13:57.508676 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:57.508733 | orchestrator | 2026-04-17 00:13:57.508746 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-17 00:13:58.096222 | orchestrator | changed: [testbed-manager] 2026-04-17 00:13:58.096262 | orchestrator | 2026-04-17 00:13:58.096268 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-17 00:13:58.144859 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-17 00:13:58.145156 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-17 00:13:58.145172 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-17 00:13:58.145177 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-17 00:14:00.222708 | orchestrator | changed: [testbed-manager] 2026-04-17 00:14:00.222862 | orchestrator | 2026-04-17 00:14:00.222876 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-17 00:14:09.008724 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-17 00:14:09.008772 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-17 00:14:09.008780 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-17 00:14:09.008785 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-17 00:14:09.008796 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-17 00:14:09.008801 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-17 00:14:09.008805 | orchestrator | 2026-04-17 00:14:09.008810 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-17 00:14:10.022143 | orchestrator | changed: [testbed-manager] 2026-04-17 00:14:10.022187 | orchestrator | 2026-04-17 00:14:10.022196 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-17 00:14:12.934657 | orchestrator | changed: [testbed-manager] 2026-04-17 00:14:12.934714 | orchestrator | 2026-04-17 00:14:12.934722 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-17 00:14:12.976306 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:14:12.976352 | orchestrator | 2026-04-17 00:14:12.976361 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-17 00:15:43.417174 | orchestrator | changed: [testbed-manager] 2026-04-17 00:15:43.417250 | orchestrator | 2026-04-17 00:15:43.417266 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-17 00:15:44.536031 | orchestrator | ok: [testbed-manager] 2026-04-17 00:15:44.536074 | 
orchestrator | 2026-04-17 00:15:44.536083 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:15:44.536090 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-17 00:15:44.536095 | orchestrator | 2026-04-17 00:15:45.231651 | orchestrator | ok: Runtime: 0:02:12.792774 2026-04-17 00:15:45.248828 | 2026-04-17 00:15:45.248994 | TASK [Reboot manager] 2026-04-17 00:15:46.797356 | orchestrator | ok: Runtime: 0:00:00.937779 2026-04-17 00:15:46.814051 | 2026-04-17 00:15:46.814216 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-17 00:16:00.650734 | orchestrator | ok 2026-04-17 00:16:00.663994 | 2026-04-17 00:16:00.664124 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-17 00:17:00.709109 | orchestrator | ok 2026-04-17 00:17:00.718172 | 2026-04-17 00:17:00.718375 | TASK [Deploy manager + bootstrap nodes] 2026-04-17 00:17:03.110346 | orchestrator | 2026-04-17 00:17:03.110474 | orchestrator | # DEPLOY MANAGER 2026-04-17 00:17:03.110485 | orchestrator | 2026-04-17 00:17:03.110491 | orchestrator | + set -e 2026-04-17 00:17:03.110496 | orchestrator | + echo 2026-04-17 00:17:03.110503 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-17 00:17:03.110511 | orchestrator | + echo 2026-04-17 00:17:03.110537 | orchestrator | + cat /opt/manager-vars.sh 2026-04-17 00:17:03.114364 | orchestrator | export NUMBER_OF_NODES=6 2026-04-17 00:17:03.114377 | orchestrator | 2026-04-17 00:17:03.114382 | orchestrator | export CEPH_VERSION=reef 2026-04-17 00:17:03.114386 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-17 00:17:03.114392 | orchestrator | export MANAGER_VERSION=latest 2026-04-17 00:17:03.114401 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-17 00:17:03.114405 | orchestrator | 2026-04-17 00:17:03.114412 | orchestrator | export ARA=false 2026-04-17 00:17:03.114416 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-17 00:17:03.114423 | orchestrator | export TEMPEST=true 2026-04-17 00:17:03.114427 | orchestrator | export IS_ZUUL=true 2026-04-17 00:17:03.114431 | orchestrator | 2026-04-17 00:17:03.114438 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:17:03.114443 | orchestrator | export EXTERNAL_API=false 2026-04-17 00:17:03.114446 | orchestrator | 2026-04-17 00:17:03.114450 | orchestrator | export IMAGE_USER=ubuntu 2026-04-17 00:17:03.114456 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-17 00:17:03.114460 | orchestrator | 2026-04-17 00:17:03.114463 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-17 00:17:03.114618 | orchestrator | 2026-04-17 00:17:03.114624 | orchestrator | + echo 2026-04-17 00:17:03.114629 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 00:17:03.115373 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 00:17:03.115380 | orchestrator | ++ INTERACTIVE=false 2026-04-17 00:17:03.115403 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 00:17:03.115408 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 00:17:03.115600 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 00:17:03.115616 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 00:17:03.115621 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 00:17:03.115687 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 00:17:03.115693 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 00:17:03.115697 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 00:17:03.115701 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 00:17:03.115705 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-17 00:17:03.115708 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-17 00:17:03.115712 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 00:17:03.115721 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 00:17:03.115725 | orchestrator | ++ 
export ARA=false 2026-04-17 00:17:03.115729 | orchestrator | ++ ARA=false 2026-04-17 00:17:03.115750 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 00:17:03.115755 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 00:17:03.115759 | orchestrator | ++ export TEMPEST=true 2026-04-17 00:17:03.115763 | orchestrator | ++ TEMPEST=true 2026-04-17 00:17:03.115767 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 00:17:03.115770 | orchestrator | ++ IS_ZUUL=true 2026-04-17 00:17:03.115774 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:17:03.115778 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:17:03.115881 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 00:17:03.115887 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 00:17:03.115891 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 00:17:03.115895 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 00:17:03.115899 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 00:17:03.115903 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 00:17:03.115906 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 00:17:03.115910 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 00:17:03.115914 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-17 00:17:03.165741 | orchestrator | + docker version 2026-04-17 00:17:03.281226 | orchestrator | Client: Docker Engine - Community 2026-04-17 00:17:03.281289 | orchestrator | Version: 27.5.1 2026-04-17 00:17:03.281295 | orchestrator | API version: 1.47 2026-04-17 00:17:03.281301 | orchestrator | Go version: go1.22.11 2026-04-17 00:17:03.281305 | orchestrator | Git commit: 9f9e405 2026-04-17 00:17:03.281309 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-17 00:17:03.281314 | orchestrator | OS/Arch: linux/amd64 2026-04-17 00:17:03.281318 | orchestrator | Context: default 2026-04-17 00:17:03.281322 | orchestrator | 2026-04-17 
00:17:03.281326 | orchestrator | Server: Docker Engine - Community 2026-04-17 00:17:03.281337 | orchestrator | Engine: 2026-04-17 00:17:03.281653 | orchestrator | Version: 27.5.1 2026-04-17 00:17:03.281660 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-17 00:17:03.281684 | orchestrator | Go version: go1.22.11 2026-04-17 00:17:03.281751 | orchestrator | Git commit: 4c9b3b0 2026-04-17 00:17:03.281757 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-17 00:17:03.281761 | orchestrator | OS/Arch: linux/amd64 2026-04-17 00:17:03.281765 | orchestrator | Experimental: false 2026-04-17 00:17:03.281769 | orchestrator | containerd: 2026-04-17 00:17:03.282032 | orchestrator | Version: v2.2.3 2026-04-17 00:17:03.282040 | orchestrator | GitCommit: 77c84241c7cbdd9b4eca2591793e3d4f4317c590 2026-04-17 00:17:03.282044 | orchestrator | runc: 2026-04-17 00:17:03.282048 | orchestrator | Version: 1.3.5 2026-04-17 00:17:03.282173 | orchestrator | GitCommit: v1.3.5-0-g488fc13e 2026-04-17 00:17:03.282179 | orchestrator | docker-init: 2026-04-17 00:17:03.282183 | orchestrator | Version: 0.19.0 2026-04-17 00:17:03.282188 | orchestrator | GitCommit: de40ad0 2026-04-17 00:17:03.285435 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-17 00:17:03.295208 | orchestrator | + set -e 2026-04-17 00:17:03.295248 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 00:17:03.295253 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 00:17:03.295258 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 00:17:03.295262 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 00:17:03.295267 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 00:17:03.295270 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 00:17:03.295275 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 00:17:03.295480 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-17 00:17:03.295487 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-17 
00:17:03.295491 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 00:17:03.295495 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 00:17:03.295499 | orchestrator | ++ export ARA=false 2026-04-17 00:17:03.295503 | orchestrator | ++ ARA=false 2026-04-17 00:17:03.295507 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 00:17:03.295511 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 00:17:03.295592 | orchestrator | ++ export TEMPEST=true 2026-04-17 00:17:03.295597 | orchestrator | ++ TEMPEST=true 2026-04-17 00:17:03.295601 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 00:17:03.295605 | orchestrator | ++ IS_ZUUL=true 2026-04-17 00:17:03.295609 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:17:03.295613 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:17:03.295617 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 00:17:03.295621 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 00:17:03.295625 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 00:17:03.295629 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 00:17:03.295633 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 00:17:03.295808 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 00:17:03.295815 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 00:17:03.295819 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 00:17:03.295823 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 00:17:03.295827 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 00:17:03.295926 | orchestrator | ++ INTERACTIVE=false 2026-04-17 00:17:03.295932 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 00:17:03.295939 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 00:17:03.296272 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-17 00:17:03.296322 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 00:17:03.296328 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-17 00:17:03.302845 | orchestrator | + set -e 2026-04-17 00:17:03.302949 | orchestrator | + VERSION=reef 2026-04-17 00:17:03.304173 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-17 00:17:03.308717 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-17 00:17:03.308757 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-17 00:17:03.314298 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-17 00:17:03.320690 | orchestrator | + set -e 2026-04-17 00:17:03.320720 | orchestrator | + VERSION=2024.2 2026-04-17 00:17:03.321209 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-17 00:17:03.324810 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-17 00:17:03.324825 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-17 00:17:03.330136 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-17 00:17:03.331014 | orchestrator | ++ semver latest 7.0.0 2026-04-17 00:17:03.394378 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 00:17:03.394438 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 00:17:03.394445 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-17 00:17:03.394457 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 00:17:03.394462 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-17 00:17:03.401007 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-04-17 00:17:03.406008 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-17 00:17:03.496519 | orchestrator | + 
[[ -e /opt/venv/bin/activate ]] 2026-04-17 00:17:03.497537 | orchestrator | + source /opt/venv/bin/activate 2026-04-17 00:17:03.498607 | orchestrator | ++ deactivate nondestructive 2026-04-17 00:17:03.498725 | orchestrator | ++ '[' -n '' ']' 2026-04-17 00:17:03.498731 | orchestrator | ++ '[' -n '' ']' 2026-04-17 00:17:03.498772 | orchestrator | ++ hash -r 2026-04-17 00:17:03.498777 | orchestrator | ++ '[' -n '' ']' 2026-04-17 00:17:03.498781 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-17 00:17:03.498915 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-17 00:17:03.498921 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-17 00:17:03.499091 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-17 00:17:03.499142 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-17 00:17:03.499148 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-17 00:17:03.499152 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-17 00:17:03.499212 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-17 00:17:03.499264 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-17 00:17:03.499306 | orchestrator | ++ export PATH 2026-04-17 00:17:03.499363 | orchestrator | ++ '[' -n '' ']' 2026-04-17 00:17:03.499526 | orchestrator | ++ '[' -z '' ']' 2026-04-17 00:17:03.499532 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-17 00:17:03.499535 | orchestrator | ++ PS1='(venv) ' 2026-04-17 00:17:03.499575 | orchestrator | ++ export PS1 2026-04-17 00:17:03.499580 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-17 00:17:03.499634 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-17 00:17:03.499640 | orchestrator | ++ hash -r 2026-04-17 00:17:03.499834 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass 
/opt/configuration/ansible/manager-part-3.yml
2026-04-17 00:17:04.621604 | orchestrator |
2026-04-17 00:17:04.621678 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-17 00:17:04.621685 | orchestrator |
2026-04-17 00:17:04.621689 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-17 00:17:05.184905 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:05.184982 | orchestrator |
2026-04-17 00:17:05.184989 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-17 00:17:06.125667 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:06.125774 | orchestrator |
2026-04-17 00:17:06.125820 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-17 00:17:06.125834 | orchestrator |
2026-04-17 00:17:06.125846 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 00:17:08.702599 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:08.702726 | orchestrator |
2026-04-17 00:17:08.702745 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-17 00:17:08.758671 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:08.758764 | orchestrator |
2026-04-17 00:17:08.758781 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-17 00:17:09.200897 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:09.201005 | orchestrator |
2026-04-17 00:17:09.201023 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-17 00:17:09.232683 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:09.232781 | orchestrator |
2026-04-17 00:17:09.232840 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-17 00:17:09.595159 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:09.595272 | orchestrator |
2026-04-17 00:17:09.595289 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-17 00:17:09.925240 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:09.925319 | orchestrator |
2026-04-17 00:17:09.925327 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-17 00:17:10.036464 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:10.036578 | orchestrator |
2026-04-17 00:17:10.036603 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-17 00:17:10.036622 | orchestrator |
2026-04-17 00:17:10.036641 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 00:17:11.887238 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:11.887370 | orchestrator |
2026-04-17 00:17:11.887396 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-17 00:17:11.995237 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-17 00:17:11.995336 | orchestrator |
2026-04-17 00:17:11.995352 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-17 00:17:12.047962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-17 00:17:12.048057 | orchestrator |
2026-04-17 00:17:12.048071 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-17 00:17:13.128118 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-17 00:17:13.128195 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-17 00:17:13.128203 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-17 00:17:13.128208 | orchestrator |
2026-04-17 00:17:13.128216 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-17 00:17:14.912064 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-17 00:17:14.912166 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-17 00:17:14.912178 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-17 00:17:14.912186 | orchestrator |
2026-04-17 00:17:14.912194 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-17 00:17:15.600176 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 00:17:15.600278 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:15.600294 | orchestrator |
2026-04-17 00:17:15.600307 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-17 00:17:16.214671 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 00:17:16.214845 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:16.214867 | orchestrator |
2026-04-17 00:17:16.214880 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-17 00:17:16.274391 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:16.274476 | orchestrator |
2026-04-17 00:17:16.274489 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-17 00:17:16.637928 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:16.638108 | orchestrator |
2026-04-17 00:17:16.638123 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-17 00:17:16.706584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-17 00:17:16.706689 | orchestrator |
2026-04-17 00:17:16.706725 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-17 00:17:17.792039 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:17.792141 | orchestrator |
2026-04-17 00:17:17.792157 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-17 00:17:18.617431 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:18.617523 | orchestrator |
2026-04-17 00:17:18.617537 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-17 00:17:28.766182 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:28.766273 | orchestrator |
2026-04-17 00:17:28.766287 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-17 00:17:28.816338 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:28.816429 | orchestrator |
2026-04-17 00:17:28.816445 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-17 00:17:28.816459 | orchestrator |
2026-04-17 00:17:28.816471 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 00:17:30.632979 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:30.633084 | orchestrator |
2026-04-17 00:17:30.633100 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-17 00:17:30.747709 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-17 00:17:30.747811 | orchestrator |
2026-04-17 00:17:30.747820 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-17 00:17:30.807610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 00:17:30.807698 | orchestrator |
2026-04-17 00:17:30.807712 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-17 00:17:33.114225 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:33.114329 | orchestrator |
2026-04-17 00:17:33.114352 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-17 00:17:33.162292 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:33.162380 | orchestrator |
2026-04-17 00:17:33.162392 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-17 00:17:33.284185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-17 00:17:33.284278 | orchestrator |
2026-04-17 00:17:33.284293 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-17 00:17:36.043904 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-17 00:17:36.044005 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-17 00:17:36.044019 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-17 00:17:36.044032 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-17 00:17:36.044043 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-17 00:17:36.044054 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-17 00:17:36.044066 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-17 00:17:36.044077 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-17 00:17:36.044088 | orchestrator |
2026-04-17 00:17:36.044101 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-17 00:17:36.651926 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:36.652028 | orchestrator |
2026-04-17 00:17:36.652042 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-17 00:17:37.276429 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:37.276513 | orchestrator |
2026-04-17 00:17:37.276530 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-17 00:17:37.347112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-17 00:17:37.347212 | orchestrator |
2026-04-17 00:17:37.347229 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-17 00:17:38.540539 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-17 00:17:38.540641 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-17 00:17:38.540657 | orchestrator |
2026-04-17 00:17:38.540670 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-17 00:17:39.188978 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:39.189078 | orchestrator |
2026-04-17 00:17:39.189095 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-17 00:17:39.242576 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:39.242671 | orchestrator |
2026-04-17 00:17:39.242685 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-17 00:17:39.320212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-17 00:17:39.320330 | orchestrator |
2026-04-17 00:17:39.320349 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-17 00:17:39.931528 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:39.931631 | orchestrator |
2026-04-17 00:17:39.931648 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-17 00:17:39.990554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-17 00:17:39.990649 | orchestrator |
2026-04-17 00:17:39.990671 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-17 00:17:41.319694 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 00:17:41.319856 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-17 00:17:41.319874 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:41.319887 | orchestrator |
2026-04-17 00:17:41.319900 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-17 00:17:41.920370 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:41.920473 | orchestrator |
2026-04-17 00:17:41.920488 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-17 00:17:41.972067 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:41.972170 | orchestrator |
2026-04-17 00:17:41.972211 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-17 00:17:42.048207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-17 00:17:42.048300 | orchestrator |
2026-04-17 00:17:42.048313 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-17 00:17:42.573348 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:42.573449 | orchestrator |
2026-04-17 00:17:42.573466 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-17 00:17:42.963159 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:42.963253 | orchestrator |
2026-04-17 00:17:42.963268 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-17 00:17:44.175739 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-17 00:17:44.175876 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-17 00:17:44.175892 | orchestrator |
2026-04-17 00:17:44.175905 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-17 00:17:44.797232 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:44.797333 | orchestrator |
2026-04-17 00:17:44.797349 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-17 00:17:45.154473 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:45.154584 | orchestrator |
2026-04-17 00:17:45.154600 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-17 00:17:45.512570 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:45.512686 | orchestrator |
2026-04-17 00:17:45.512702 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-17 00:17:45.560591 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:45.560692 | orchestrator |
2026-04-17 00:17:45.560709 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-17 00:17:45.630829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-17 00:17:45.630916 | orchestrator |
2026-04-17 00:17:45.630925 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-17 00:17:45.663932 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:45.664035 | orchestrator |
2026-04-17 00:17:45.664057 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-17 00:17:47.643076 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-17 00:17:47.643178 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-17 00:17:47.643197 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-17 00:17:47.643211 | orchestrator |
2026-04-17 00:17:47.643225 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-17 00:17:48.383664 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:48.383881 | orchestrator |
2026-04-17 00:17:48.383903 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-17 00:17:49.178232 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:49.178338 | orchestrator |
2026-04-17 00:17:49.178355 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-17 00:17:49.883186 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:49.883304 | orchestrator |
2026-04-17 00:17:49.883319 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-17 00:17:49.949844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-17 00:17:49.949937 | orchestrator |
2026-04-17 00:17:49.949953 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-17 00:17:49.997419 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:49.997523 | orchestrator |
2026-04-17 00:17:49.997547 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-17 00:17:50.707734 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-17 00:17:50.707862 | orchestrator |
2026-04-17 00:17:50.707878 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-17 00:17:50.798339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-17 00:17:50.798420 | orchestrator |
2026-04-17 00:17:50.798432 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-17 00:17:51.603609 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:51.603712 | orchestrator |
2026-04-17 00:17:51.603729 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-17 00:17:52.230518 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:52.230618 | orchestrator |
2026-04-17 00:17:52.230636 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-17 00:17:52.290993 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:17:52.291089 | orchestrator |
2026-04-17 00:17:52.291112 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-17 00:17:52.341591 | orchestrator | ok: [testbed-manager]
2026-04-17 00:17:52.341682 | orchestrator |
2026-04-17 00:17:52.341696 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-17 00:17:53.209862 | orchestrator | changed: [testbed-manager]
2026-04-17 00:17:53.209960 | orchestrator |
2026-04-17 00:17:53.209977 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-17 00:19:03.247587 | orchestrator | changed: [testbed-manager]
2026-04-17 00:19:03.247702 | orchestrator |
2026-04-17 00:19:03.247721 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-17 00:19:04.192695 | orchestrator | ok: [testbed-manager]
2026-04-17 00:19:04.192849 | orchestrator |
2026-04-17 00:19:04.192868 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-17 00:19:04.256986 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:19:04.257074 | orchestrator |
2026-04-17 00:19:04.257090 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-17 00:19:06.425473 | orchestrator | changed: [testbed-manager]
2026-04-17 00:19:06.425573 | orchestrator |
2026-04-17 00:19:06.425588 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-17 00:19:06.523419 | orchestrator | ok: [testbed-manager]
2026-04-17 00:19:06.523508 | orchestrator |
2026-04-17 00:19:06.523521 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-17 00:19:06.523532 | orchestrator |
2026-04-17 00:19:06.523541 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-17 00:19:06.578286 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:19:06.578376 | orchestrator |
2026-04-17 00:19:06.578391 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-17 00:20:06.632039 | orchestrator | Pausing for 60 seconds
2026-04-17 00:20:06.632141 | orchestrator | changed: [testbed-manager]
2026-04-17 00:20:06.632158 | orchestrator |
2026-04-17 00:20:06.632171 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-17 00:20:09.547874 | orchestrator | changed: [testbed-manager]
2026-04-17 00:20:09.547987 | orchestrator |
2026-04-17 00:20:09.548005 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-17 00:20:50.889007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-17 00:20:50.889139 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-17 00:20:50.889151 | orchestrator | changed: [testbed-manager]
2026-04-17 00:20:50.889160 | orchestrator |
2026-04-17 00:20:50.889169 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-17 00:20:55.877587 | orchestrator | changed: [testbed-manager]
2026-04-17 00:20:55.877722 | orchestrator |
2026-04-17 00:20:55.877750 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-17 00:20:55.945750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-17 00:20:55.945872 | orchestrator |
2026-04-17 00:20:55.945884 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-17 00:20:55.945894 | orchestrator |
2026-04-17 00:20:55.945902 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-17 00:20:55.990925 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:20:55.991006 | orchestrator |
2026-04-17 00:20:55.991021 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-17 00:20:56.061364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-17 00:20:56.061460 | orchestrator |
2026-04-17 00:20:56.061474 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-17 00:20:56.664922 | orchestrator | changed: [testbed-manager]
2026-04-17 00:20:56.665033 | orchestrator |
2026-04-17 00:20:56.665051 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-17 00:20:59.619001 | orchestrator | ok: [testbed-manager]
2026-04-17 00:20:59.619105 | orchestrator |
2026-04-17 00:20:59.619121 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-17 00:20:59.692924 | orchestrator | ok: [testbed-manager] => {
2026-04-17 00:20:59.693038 | orchestrator | "version_check_result.stdout_lines": [
2026-04-17 00:20:59.693061 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-17 00:20:59.693082 | orchestrator | "Checking running containers against expected versions...",
2026-04-17 00:20:59.693100 | orchestrator | "",
2026-04-17 00:20:59.693118 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-17 00:20:59.693136 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-04-17 00:20:59.693154 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693172 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-04-17 00:20:59.693190 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693208 | orchestrator | "",
2026-04-17 00:20:59.693226 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-17 00:20:59.693245 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-04-17 00:20:59.693263 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693281 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-04-17 00:20:59.693299 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693317 | orchestrator | "",
2026-04-17 00:20:59.693336 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-17 00:20:59.693355 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-04-17 00:20:59.693372 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693391 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-04-17 00:20:59.693410 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693428 | orchestrator | "",
2026-04-17 00:20:59.693445 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-17 00:20:59.693462 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-04-17 00:20:59.693480 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693499 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-04-17 00:20:59.693517 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693534 | orchestrator | "",
2026-04-17 00:20:59.693551 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-17 00:20:59.693600 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-04-17 00:20:59.693618 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693635 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-04-17 00:20:59.693653 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693669 | orchestrator | "",
2026-04-17 00:20:59.693688 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-17 00:20:59.693706 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.693723 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693740 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.693758 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693806 | orchestrator | "",
2026-04-17 00:20:59.693825 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-17 00:20:59.693857 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-17 00:20:59.693874 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693891 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-17 00:20:59.693907 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.693924 | orchestrator | "",
2026-04-17 00:20:59.693940 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-17 00:20:59.693957 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-17 00:20:59.693974 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.693990 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-17 00:20:59.694006 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694213 | orchestrator | "",
2026-04-17 00:20:59.694238 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-17 00:20:59.694257 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-04-17 00:20:59.694275 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694299 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-04-17 00:20:59.694318 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694338 | orchestrator | "",
2026-04-17 00:20:59.694355 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-17 00:20:59.694374 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-17 00:20:59.694393 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694412 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-17 00:20:59.694429 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694448 | orchestrator | "",
2026-04-17 00:20:59.694466 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-17 00:20:59.694485 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694503 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694522 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694534 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694549 | orchestrator | "",
2026-04-17 00:20:59.694572 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-17 00:20:59.694600 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694617 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694635 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694652 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694669 | orchestrator | "",
2026-04-17 00:20:59.694686 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-17 00:20:59.694704 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694720 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694737 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694754 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694796 | orchestrator | "",
2026-04-17 00:20:59.694815 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-17 00:20:59.694834 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694852 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694889 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694901 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694912 | orchestrator | "",
2026-04-17 00:20:59.694923 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-17 00:20:59.694957 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694968 | orchestrator | " Enabled: true",
2026-04-17 00:20:59.694977 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-04-17 00:20:59.694987 | orchestrator | " Status: ✅ MATCH",
2026-04-17 00:20:59.694997 | orchestrator | "",
2026-04-17 00:20:59.695006 | orchestrator | "=== Summary ===",
2026-04-17 00:20:59.695016 | orchestrator | "Errors (version mismatches): 0",
2026-04-17 00:20:59.695026 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-17 00:20:59.695035 | orchestrator | "",
2026-04-17 00:20:59.695046 | orchestrator | "✅ All running containers match expected versions!"
2026-04-17 00:20:59.695056 | orchestrator | ]
2026-04-17 00:20:59.695067 | orchestrator | }
2026-04-17 00:20:59.695084 | orchestrator |
2026-04-17 00:20:59.695104 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-17 00:20:59.757306 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:20:59.757406 | orchestrator |
2026-04-17 00:20:59.757421 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:20:59.757435 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-17 00:20:59.757447 | orchestrator |
2026-04-17 00:20:59.861445 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-17 00:20:59.861540 | orchestrator | + deactivate
2026-04-17 00:20:59.861555 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-17 00:20:59.861568 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-17 00:20:59.861579 | orchestrator | + export PATH
2026-04-17 00:20:59.861590 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-17 00:20:59.861601 | orchestrator | + '[' -n '' ']'
2026-04-17 00:20:59.861612 | orchestrator | + hash -r
2026-04-17 00:20:59.861623 | orchestrator | + '[' -n '' ']'
2026-04-17 00:20:59.861634 | orchestrator | + unset VIRTUAL_ENV
2026-04-17 00:20:59.861645 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-17 00:20:59.861656 | orchestrator | + '[' '!'
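The deactivate trace just above shows the standard venv PATH bookkeeping: activate had stashed the previous PATH in `_OLD_VIRTUAL_PATH` and prepended `/opt/venv/bin`; deactivate restores the stashed value and unsets `VIRTUAL_ENV`. A minimal sketch of that mechanism, with hypothetical function names (`activate_sketch` / `deactivate_sketch` are illustrative, not part of the real activate script):

```shell
#!/usr/bin/env bash
set -e

# Sketch of the PATH save/restore pattern visible in the trace.
# The real logic lives in /opt/venv/bin/activate and its generated
# deactivate function; these names are illustrative only.
activate_sketch() {
    _OLD_VIRTUAL_PATH="$PATH"        # stash the pre-venv PATH
    PATH="/opt/venv/bin:$PATH"       # venv binaries win lookups
    VIRTUAL_ENV=/opt/venv
    export PATH VIRTUAL_ENV
}
deactivate_sketch() {
    if [ -n "$_OLD_VIRTUAL_PATH" ]; then
        PATH="$_OLD_VIRTUAL_PATH"    # restore the stashed PATH
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    unset VIRTUAL_ENV
}

orig_path="$PATH"
activate_sketch
echo "activated: $VIRTUAL_ENV"
deactivate_sketch
[ "$PATH" = "$orig_path" ] && echo "PATH restored"
```

The `nondestructive` argument seen in the trace exists so that sourcing activate twice first unwinds any previous activation without unsetting the deactivate function itself.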
'' = nondestructive ']' 2026-04-17 00:20:59.861666 | orchestrator | + unset -f deactivate 2026-04-17 00:20:59.861678 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-17 00:20:59.869102 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 00:20:59.869127 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-17 00:20:59.869138 | orchestrator | + local max_attempts=60 2026-04-17 00:20:59.869149 | orchestrator | + local name=ceph-ansible 2026-04-17 00:20:59.869160 | orchestrator | + local attempt_num=1 2026-04-17 00:20:59.870274 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:20:59.912429 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:20:59.912494 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-17 00:20:59.912507 | orchestrator | + local max_attempts=60 2026-04-17 00:20:59.912519 | orchestrator | + local name=kolla-ansible 2026-04-17 00:20:59.912530 | orchestrator | + local attempt_num=1 2026-04-17 00:20:59.912958 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-17 00:20:59.944550 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:20:59.944620 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-17 00:20:59.944632 | orchestrator | + local max_attempts=60 2026-04-17 00:20:59.944644 | orchestrator | + local name=osism-ansible 2026-04-17 00:20:59.944655 | orchestrator | + local attempt_num=1 2026-04-17 00:20:59.945838 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-17 00:20:59.979629 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:20:59.979704 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-17 00:20:59.979725 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-17 00:21:00.644445 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-17 00:21:00.792032 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-17 00:21:00.792157 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792175 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792187 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-17 00:21:00.792200 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-17 00:21:00.792229 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792241 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792252 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2026-04-17 00:21:00.792263 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792274 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-17 00:21:00.792284 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-17 00:21:00.792295 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-17 00:21:00.792306 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792317 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-17 00:21:00.792328 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.792339 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-17 00:21:00.800539 | orchestrator | ++ semver latest 7.0.0 2026-04-17 00:21:00.831462 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 00:21:00.831547 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 00:21:00.831562 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-17 00:21:00.833657 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-17 00:21:13.175245 | orchestrator | 2026-04-17 00:21:13 | INFO  | Prepare task for execution of resolvconf. 2026-04-17 00:21:13.380283 | orchestrator | 2026-04-17 00:21:13 | INFO  | Task 7c179fdc-681f-469f-bdd8-8747bb705a66 (resolvconf) was prepared for execution. 2026-04-17 00:21:13.380406 | orchestrator | 2026-04-17 00:21:13 | INFO  | It takes a moment until task 7c179fdc-681f-469f-bdd8-8747bb705a66 (resolvconf) has been started and output is visible here. 
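Editor's note: the `wait_for_container_healthy` helper traced in the deploy script above polls `docker inspect` until a container's health status reads `healthy`. A minimal sketch of that pattern follows; the retry interval, error message, and failure handling are assumptions, not the testbed script's actual code.

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy helper traced above.
# Polls `docker inspect -f '{{.State.Health.Status}}'` (the command
# visible in the xtrace) until the container reports "healthy" or the
# attempt budget is exhausted. Sleep interval is an assumption.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}

# Example call, matching the trace: wait_for_container_healthy 60 ceph-ansible
```

In the log the first `docker inspect` already returns `healthy` for all three containers, so the loop body never executes.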
2026-04-17 00:21:26.048167 | orchestrator | 2026-04-17 00:21:26.048272 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-17 00:21:26.048289 | orchestrator | 2026-04-17 00:21:26.048301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 00:21:26.048313 | orchestrator | Friday 17 April 2026 00:21:16 +0000 (0:00:00.170) 0:00:00.170 ********** 2026-04-17 00:21:26.048325 | orchestrator | ok: [testbed-manager] 2026-04-17 00:21:26.048337 | orchestrator | 2026-04-17 00:21:26.048349 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-17 00:21:26.048360 | orchestrator | Friday 17 April 2026 00:21:20 +0000 (0:00:03.782) 0:00:03.952 ********** 2026-04-17 00:21:26.048382 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:21:26.048395 | orchestrator | 2026-04-17 00:21:26.048406 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-17 00:21:26.048417 | orchestrator | Friday 17 April 2026 00:21:20 +0000 (0:00:00.060) 0:00:04.013 ********** 2026-04-17 00:21:26.048428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-17 00:21:26.048440 | orchestrator | 2026-04-17 00:21:26.048451 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-17 00:21:26.048462 | orchestrator | Friday 17 April 2026 00:21:20 +0000 (0:00:00.086) 0:00:04.099 ********** 2026-04-17 00:21:26.048474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 00:21:26.048485 | orchestrator | 2026-04-17 00:21:26.048496 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-17 00:21:26.048507 | orchestrator | Friday 17 April 2026 00:21:20 +0000 (0:00:00.090) 0:00:04.190 ********** 2026-04-17 00:21:26.048518 | orchestrator | ok: [testbed-manager] 2026-04-17 00:21:26.048529 | orchestrator | 2026-04-17 00:21:26.048540 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-17 00:21:26.048551 | orchestrator | Friday 17 April 2026 00:21:21 +0000 (0:00:01.137) 0:00:05.328 ********** 2026-04-17 00:21:26.048562 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:21:26.048573 | orchestrator | 2026-04-17 00:21:26.048584 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-17 00:21:26.048595 | orchestrator | Friday 17 April 2026 00:21:21 +0000 (0:00:00.057) 0:00:05.385 ********** 2026-04-17 00:21:26.048606 | orchestrator | ok: [testbed-manager] 2026-04-17 00:21:26.048617 | orchestrator | 2026-04-17 00:21:26.048628 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-17 00:21:26.048639 | orchestrator | Friday 17 April 2026 00:21:22 +0000 (0:00:00.556) 0:00:05.942 ********** 2026-04-17 00:21:26.048650 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:21:26.048661 | orchestrator | 2026-04-17 00:21:26.048672 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-17 00:21:26.048684 | orchestrator | Friday 17 April 2026 00:21:22 +0000 (0:00:00.079) 0:00:06.022 ********** 2026-04-17 00:21:26.048698 | orchestrator | changed: [testbed-manager] 2026-04-17 00:21:26.048710 | orchestrator | 2026-04-17 00:21:26.048723 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-17 00:21:26.048735 | orchestrator | Friday 17 April 2026 00:21:22 +0000 (0:00:00.585) 0:00:06.607 ********** 2026-04-17 00:21:26.048748 | orchestrator | changed: 
[testbed-manager] 2026-04-17 00:21:26.048802 | orchestrator | 2026-04-17 00:21:26.048815 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-17 00:21:26.048827 | orchestrator | Friday 17 April 2026 00:21:23 +0000 (0:00:01.047) 0:00:07.655 ********** 2026-04-17 00:21:26.048859 | orchestrator | ok: [testbed-manager] 2026-04-17 00:21:26.048872 | orchestrator | 2026-04-17 00:21:26.048884 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-17 00:21:26.048897 | orchestrator | Friday 17 April 2026 00:21:24 +0000 (0:00:00.885) 0:00:08.540 ********** 2026-04-17 00:21:26.048910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-17 00:21:26.048922 | orchestrator | 2026-04-17 00:21:26.048935 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-17 00:21:26.048948 | orchestrator | Friday 17 April 2026 00:21:24 +0000 (0:00:00.074) 0:00:08.615 ********** 2026-04-17 00:21:26.048961 | orchestrator | changed: [testbed-manager] 2026-04-17 00:21:26.048974 | orchestrator | 2026-04-17 00:21:26.048987 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:21:26.049000 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:21:26.049014 | orchestrator | 2026-04-17 00:21:26.049027 | orchestrator | 2026-04-17 00:21:26.049041 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:21:26.049053 | orchestrator | Friday 17 April 2026 00:21:25 +0000 (0:00:01.038) 0:00:09.653 ********** 2026-04-17 00:21:26.049064 | orchestrator | =============================================================================== 2026-04-17 00:21:26.049075 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.78s 2026-04-17 00:21:26.049086 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s 2026-04-17 00:21:26.049097 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2026-04-17 00:21:26.049108 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.04s 2026-04-17 00:21:26.049118 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.89s 2026-04-17 00:21:26.049135 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2026-04-17 00:21:26.049164 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-04-17 00:21:26.049176 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-04-17 00:21:26.049186 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-04-17 00:21:26.049197 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-17 00:21:26.049208 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-04-17 00:21:26.049219 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-17 00:21:26.049230 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-17 00:21:26.169940 | orchestrator | + osism apply sshconfig 2026-04-17 00:21:37.282830 | orchestrator | 2026-04-17 00:21:37 | INFO  | Prepare task for execution of sshconfig. 2026-04-17 00:21:37.351900 | orchestrator | 2026-04-17 00:21:37 | INFO  | Task d5c1edd0-fcc6-4450-965a-5ea94e6920f0 (sshconfig) was prepared for execution. 
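Editor's note: the `resolvconf` play above reports a changed task "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf". The effect of that task can be sketched as below; `link_stub_resolv` is a hypothetical helper for illustration (the role itself uses Ansible's `file` module, not this script), and the optional root prefix exists only to make the sketch testable.

```shell
#!/usr/bin/env bash
# Hypothetical helper illustrating the symlink the role creates:
# point /etc/resolv.conf at systemd-resolved's stub resolver file.
# The paths come from the play output above.
link_stub_resolv() {
    local root="${1:-}"   # optional prefix for sandboxed testing; empty in real use
    ln -sfn "${root}/run/systemd/resolve/stub-resolv.conf" "${root}/etc/resolv.conf"
}
```

After this link is in place, restarting `systemd-resolved` (the play's final changed task) makes the stub resolver authoritative for the host's DNS lookups.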
2026-04-17 00:21:37.352008 | orchestrator | 2026-04-17 00:21:37 | INFO  | It takes a moment until task d5c1edd0-fcc6-4450-965a-5ea94e6920f0 (sshconfig) has been started and output is visible here. 2026-04-17 00:21:47.479284 | orchestrator | 2026-04-17 00:21:47.479377 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-17 00:21:47.479388 | orchestrator | 2026-04-17 00:21:47.479396 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-17 00:21:47.479403 | orchestrator | Friday 17 April 2026 00:21:40 +0000 (0:00:00.171) 0:00:00.171 ********** 2026-04-17 00:21:47.479410 | orchestrator | ok: [testbed-manager] 2026-04-17 00:21:47.479418 | orchestrator | 2026-04-17 00:21:47.479443 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-17 00:21:47.479450 | orchestrator | Friday 17 April 2026 00:21:41 +0000 (0:00:00.925) 0:00:01.096 ********** 2026-04-17 00:21:47.479457 | orchestrator | changed: [testbed-manager] 2026-04-17 00:21:47.479464 | orchestrator | 2026-04-17 00:21:47.479470 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-17 00:21:47.479477 | orchestrator | Friday 17 April 2026 00:21:41 +0000 (0:00:00.481) 0:00:01.577 ********** 2026-04-17 00:21:47.479483 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-17 00:21:47.479490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-17 00:21:47.479496 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-17 00:21:47.479503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-17 00:21:47.479509 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-17 00:21:47.479515 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-17 00:21:47.479521 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-17 00:21:47.479527 | orchestrator | 2026-04-17 00:21:47.479534 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-17 00:21:47.479540 | orchestrator | Friday 17 April 2026 00:21:46 +0000 (0:00:05.052) 0:00:06.630 ********** 2026-04-17 00:21:47.479546 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:21:47.479552 | orchestrator | 2026-04-17 00:21:47.479559 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-17 00:21:47.479565 | orchestrator | Friday 17 April 2026 00:21:46 +0000 (0:00:00.082) 0:00:06.712 ********** 2026-04-17 00:21:47.479572 | orchestrator | changed: [testbed-manager] 2026-04-17 00:21:47.479578 | orchestrator | 2026-04-17 00:21:47.479585 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:21:47.479593 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:21:47.479599 | orchestrator | 2026-04-17 00:21:47.479606 | orchestrator | 2026-04-17 00:21:47.479612 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:21:47.479618 | orchestrator | Friday 17 April 2026 00:21:47 +0000 (0:00:00.507) 0:00:07.220 ********** 2026-04-17 00:21:47.479624 | orchestrator | =============================================================================== 2026-04-17 00:21:47.479630 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.05s 2026-04-17 00:21:47.479636 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.93s 2026-04-17 00:21:47.479643 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2026-04-17 00:21:47.479649 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.48s 2026-04-17 00:21:47.479655 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-04-17 00:21:47.652077 | orchestrator | + osism apply known-hosts 2026-04-17 00:21:58.904892 | orchestrator | 2026-04-17 00:21:58 | INFO  | Prepare task for execution of known-hosts. 2026-04-17 00:21:58.979292 | orchestrator | 2026-04-17 00:21:58 | INFO  | Task fcc1a90c-3bbd-4b97-993c-6d586a6557b9 (known-hosts) was prepared for execution. 2026-04-17 00:21:58.979390 | orchestrator | 2026-04-17 00:21:58 | INFO  | It takes a moment until task fcc1a90c-3bbd-4b97-993c-6d586a6557b9 (known-hosts) has been started and output is visible here. 2026-04-17 00:22:14.421734 | orchestrator | 2026-04-17 00:22:14.421949 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-17 00:22:14.421997 | orchestrator | 2026-04-17 00:22:14.422099 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-17 00:22:14.422130 | orchestrator | Friday 17 April 2026 00:22:02 +0000 (0:00:00.194) 0:00:00.194 ********** 2026-04-17 00:22:14.422153 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-17 00:22:14.422206 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-17 00:22:14.422228 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-17 00:22:14.422245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-17 00:22:14.422264 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-17 00:22:14.422282 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-17 00:22:14.422301 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-17 00:22:14.422321 | orchestrator | 2026-04-17 00:22:14.422341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-17 
00:22:14.422361 | orchestrator | Friday 17 April 2026 00:22:08 +0000 (0:00:06.323) 0:00:06.517 ********** 2026-04-17 00:22:14.422382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-17 00:22:14.422404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-17 00:22:14.422425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-17 00:22:14.422445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-17 00:22:14.422462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-17 00:22:14.422479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-17 00:22:14.422497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-17 00:22:14.422514 | orchestrator | 2026-04-17 00:22:14.422531 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 00:22:14.422549 | orchestrator | Friday 17 April 2026 00:22:08 +0000 (0:00:00.181) 0:00:06.698 ********** 2026-04-17 00:22:14.422570 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK9K+z6mdXXJUyaM5TMK6MPjbTYP8yIFym6ssECy1GSk+twbi/eq7UcJSxZTSFwCjhKBTl80/n+vzCBPPd6lgbKmNKg2VAkFVfOetTfpttarb2LuNnVbG+bR33uQvl/2aDrEKBn3WZ0/bnnXB9Ch4sU7XauADT2XiaBgnIl38zWba1HaTvTCwcCAVEWw4/A0sPFBTo2QCevAvpuTAOYlzyK1n8WsUl6NQPkW4vm5SJ5iQ63/GSWe4dY3jmtxEQw3MeCxoWKHzaM8UflX+fuUtBKiRLHfmgGOwPge8xJCJjaZNppPC7EGMl0c62rtAuh127qzuOpH8x0LRdwyruiOurgVPLW6nEZXERO1etZIdhJCi01Xxm/I8gZE3tFOohzSpQaR5EiQt5+1SJ4Ay9WK6rqajCOysmXAxrfxO/Hcru1JKBaaAUUoIioqKCxHJThpnc6BGpjYPsdwmme7V4zygvuIGTu7l1p8mCKWSOHg9xuJUV+geyPKlvlklFqDSYBkE=) 2026-04-17 00:22:14.422592 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO8kjBfsuGWlfhpa+LCisLXp8/+sxB6e19EtNKR+CSMZmU/nI1IWVDeyxkkY48KOd8IGvn7ITc5EAFD6BPY+oic=) 2026-04-17 00:22:14.422612 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKAg7a0ViYQXyW/OaCpkryoma1DTs3BIAiAGqPd9Cf4v) 2026-04-17 00:22:14.422630 | orchestrator | 2026-04-17 00:22:14.422648 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 00:22:14.422665 | orchestrator | Friday 17 April 2026 00:22:09 +0000 (0:00:01.270) 0:00:07.969 ********** 2026-04-17 00:22:14.422712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5RvffEVhHKlaccmliZoO0CGNkRlKpDpuc3Fqg7Dp0UiZU466B9W5jP+QoHOTDrH0YqEdP9vQ5bX8+48zZyn1zMql35EE5tEEVPxNaso2HMFlbxCH98BH8b6lihBiOSmr2b/qsnRPE1CDmi2hFaE3BT8/rOr/+4zrap/BFfz552R8+hqVgSnOSZ7OW0CSxK26jSZ3KBEYFd8WOvoPPajMHsB8/ZJShGPDXXJZOHv8fxdoHMdR/CW1huQFDk7Iu6LZC9sXxmiX+Wt+ZCK9gWfOg0Zzz1anuCEo061bTXwntjM/2y3HZ2Z0ZLkrLuajw4RA3fMWMUUVv15Y7Dmj4ANVnKrcRRyBD9hEh6/DhHbI2D1Zt7mm7QATfOPAJ8/X38r2PCYsjkTDyuauoi8QhKN+Pq7L2dYQf+/G3103qzJrdBe22xmyhswbhWJltivTeJgjF0Fl8QqGJBcRCKOvOtGPdkJEAKDEsHeNn47moU632EmcXQjCVixNbXpnGHUeMkRs=) 
2026-04-17 00:22:14.422781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJtR5gGjCO3OKP3KoBPISHEpdtgX+aqHxM08IMLEYOxZ9jN29tgX/wqz5Q5rIED1827UB787CU/sG6djT6ezRkA=) 2026-04-17 00:22:14.422804 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBeXRmikAglRMd2KqN+h6iFYaRyc92px637jCYcQsZlt) 2026-04-17 00:22:14.422823 | orchestrator | 2026-04-17 00:22:14.422931 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 00:22:14.422956 | orchestrator | Friday 17 April 2026 00:22:10 +0000 (0:00:01.041) 0:00:09.011 ********** 2026-04-17 00:22:14.422977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtp2zLtx8U7XLsQm2Vc2U2t2127M6WG1no2Px6qXm/J8EGWoF/HrSzYJoUnaoDsoujYIgkiTvGeIsDvpRBkoNdO1oNxD2wyiu1wBao3bxcW6Q8rMkqQVAPnrpnyxOyGx06isNHtdKZB40cdHZ0U2dHJyGXnfhH6kLjEvEyNPuBh2Ph7En/sc0WwN0zvNcuMs9I4l12Dc6ffK1gJ0TPsRbOyk7UXxx5bTqXSlVgCY65390NeF0EAY9boa1HNuCNaAHkUxCRMpCRnG4NnkiUIAJogcM7NVD0yLHlUWIuVpa2wRZMyHWzfiVreFkSiIW5Vkks07MAkbjgsppsd3+VPIASLRPUQ371AgUjpQyywLNJlcZMMsS8HygvSADMluK8q1SNxMvknT6HddPHJ1Rxd2J2wl9+goCgMadvcYGVNz45WGQLnswR+q4AjnPX/t3rnN/jSGjYofxrT2KDCH6vKoZA0NlLbTt2qiElNCA9+Zqw+hdnaJ16RtfuBN8cyWUE/Hk=) 2026-04-17 00:22:14.422997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOEuPbbd3w08GsVxE3mVHVzW2UkraNzp8RIqdStHrMJ/gF8H1gkPHy8CzdzXgBo7ZrudrEfLClVgIGBlu9PgWs=) 2026-04-17 00:22:14.423016 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHCbCBoa+qigpcVcFDsk/1XvDEHmQ8nRw1ndrqcjWkcp) 2026-04-17 00:22:14.423036 | orchestrator | 2026-04-17 00:22:14.423056 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 00:22:14.423075 
| orchestrator | Friday 17 April 2026 00:22:12 +0000 (0:00:01.036) 0:00:10.047 ********** 2026-04-17 00:22:14.423096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDyqTdfwi0A4ZUjaRjP06Su4+nQv2icrJYKZ1P/AE2+9pJKLPI41WrHakZnwgCJQbR0T7GyTKKBi9JchjTIOkuoECg1erEl+foVQkS7CLUFqpPF1JczaLYRqWk1Z8LaZ/jeTsbgd4jChvFtfW6hHKBWQmx2mEdzDuFjmx0qiRjQOS9llwsYBgxy6WQ/ASr82bGUUkRw8/GqQVWTTNxplXTrmqPtwtYtU0Iwul7n95iTTk6fy2YjzvKiCYa0LAI7WqU3YZnagIKFeYqMRCCewDIQvvvrkzyAmslmJbf/og1Vj5kdS6mrYnjXz/dFuKIN6j23sxUuZSLs2KezP6Dku4QhymEkRnhw1OvGCZX3n6y390LaiKocf0V5FoB26sTcaf7XCt3lMxr1xHdVpaPKg2WAXzA1KRvCBBbfmLGlpX4iD2yhiBbFFpSRcRhlJmfRxCe/bkZCkXvlI4rFPh+7HBSQqGQtQEtzxdfZ7SNHRaBCRdyPNwSAdQS2ok+myyZKeU=) 2026-04-17 00:22:14.423117 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBET29mBFv94bJO+rKQsAp2hTAyEy0QKY00TfptEcUYI147C7wYbDjUdJAzu9L790T3T+9GCIHH7Tqvn50YebIgw=) 2026-04-17 00:22:14.423136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIItnGPZu8LhMFpCMDvtAYDupR2R+f25GzM4ypQHoGIRb) 2026-04-17 00:22:14.423156 | orchestrator | 2026-04-17 00:22:14.423176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 00:22:14.423196 | orchestrator | Friday 17 April 2026 00:22:13 +0000 (0:00:00.992) 0:00:11.040 ********** 2026-04-17 00:22:14.423216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGpfU8RH9p2r+CPGtll9ujR/yw6ArfMLQWeHSxipjjh) 2026-04-17 00:22:14.423235 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDt4dxak5C4XZIn4skwLIgUZQYkMdbyUUoMeHXVmmcV3ZT/pqfL5EEsfp/VKY8fYlRQT+mBp8kgqxYCoj7rheUxKzmOefMQiLLVlASbLbQkR/00w4XsG+OQ8Papx7GrSRxvEZzD8AC9LS8nYIPgRLD+QUhmLWZUBjoD9Z3oLuhc73tN4FUwvIDWPjQQrqcXuUd3Lu5heu/2/rHRltiHTs2ThrfsuxUp2rPqDGRxMaimqH6qdJDdfWHeLUMpUME5uiqxTukWn/ceg0RtrLPGR1nnITEq7lY61Gu3BlJFvfePmqQQjRotgRL0khSmKhV/xQQdPQjSmrjlVyKgEOdk0AVH8YWl1aj7A598XoHWg2UnmRwtRmDHzk8TXemZNJU//OYauU5CJKCPBvcAhMOsnyWxjTqUKmxDQlMvdBHS1o9YKOHS215vNaMC+0c1fA8SRV75aSSdKaVIwbrohiHBf5nk3y9zTrvRL5/h/eMuiJTcrqV4GJEtkQR0KLwC8HzPm3c=) 2026-04-17 00:22:14.423270 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF0Op5bhhfsr9apvjSIUhS2dRhSTVJ3Q2h6o4Tsfdq3MXxzIoQ8SeYnF2ZPczXxGOY7X6gHvpqgYFTrqwmGq/x0=) 2026-04-17 00:22:14.423289 | orchestrator | 2026-04-17 00:22:14.423309 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-17 00:22:14.423328 | orchestrator | Friday 17 April 2026 00:22:14 +0000 (0:00:01.028) 0:00:12.069 ********** 2026-04-17 00:22:14.423363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJAVlrCG3FYpupA63/VZ3wCEaBDuJzkfHVgq40cb+Ksdv2Gn0K9oxOnQK8gTgTU+UroLbdSUQ0rh/76sg636c5E=) 2026-04-17 00:22:25.217076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILmYYK4vusYGvCHhhF76EL8Xnx7N5PQJQ6KOi9cdA5fZ) 2026-04-17 00:22:25.217187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDpTJUhZEEmI8t9yP+NLXvUBvodXCF6+iHtFwavPvNc3o3KnR5fIl2GaZMemhHbuNWKTZhOGeG8gr5PyWjv/wFp3dLWtYODaQiRfNtMyc0f3568jkxazHn93Hn7brDbv87XvWEMFco4vNGLdUZfwRdbh2AEMGZHhIqq64GGRoQFR5OeiKomzw60mm9/yaPPXwR37BsaKTZm1R+Y7ZMQZWYVlY0vUhCGAamAmd6gFF+yuYmrWYKUH2ES+GlIjDgzt07WCL6BYe91dTJc34xQeRnnipFXqnrjh0nn6I5Fs4x8meg2Zil8cTVTx+kxQEB6VyL6Lamki2uhry7myc5FsPgTIuQDGdZcK8h/aW0YW2qK/em37/9BStOBvJljcsGUpkZJnG1LmzYIEVkTVUtgw6cXzPPPUsRnldSgoCgwlfC97rmDTJrNATiMhvdEh1nI5yRQCek9MbJvt9zh/yDUHaOUrLD2jmNooI2B9u1o2LpxYeDjQyDxmRLJeBJa/A4HBV0=)
2026-04-17 00:22:25.217206 | orchestrator |
2026-04-17 00:22:25.217219 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:25.217231 | orchestrator | Friday 17 April 2026 00:22:15 +0000 (0:00:01.012) 0:00:13.082 **********
2026-04-17 00:22:25.217243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL4uIUTjM6fJZNqPhsvI2Ii3rF/ZJ+kQtXMbUbx8dVwE88uj9FVe9I15A4P4zaaz9lBMHkNXuRW/w5XVlo1hzB8=)
2026-04-17 00:22:25.217282 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRMldZPgpWL1PB+5F+Xx46dzb6vPZoGTeyBPOTlqhTmbt1KpTo1pvKwVkFnVitbPJnfrk0jB/UJOptHCyCxT0as23/wCnVRuPA2NQVuHRCA5LL66GbfEOfoLR3CBdbzOQYjAA0kLDGR+CG9Wc22Ctk4mQvmZWVdzU+RQ+Rj47tiiNj4q393jmDhgqA5zvh9Z/H4FnVQdHh92TSwva/SwjZZNhUghAzlWVAB1UOFePd/YHACx3PjxXQrXMDxntknW821yMIUO99moE12LRCyU1OFg+YZIqc+uPB0dX4SHzFjU1Ji/75FLbgWrjS+uesg2azPqHJYsk4RNsr4m5Lsoz+wUuPYIyKy5PYcmlQdkVu5YVzHg4zJAkiIRPgJmKjpeFrZNr6lFvnbR2vlf8KOxoyWf8rhAr8qaqa0oPyapaOQXR1jwhYUPScAWCycHCuBBq3yLtfZW8WGlpMWbY7EXoL8Yo/014lnsBmEza9jvKfnMVrEBkI+NsX1wfUQeP9scs=)
2026-04-17 00:22:25.217294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOYoYtSUEMV5AxqNOK14/isjJT04Syj/7Gt2b2GH25uB)
2026-04-17 00:22:25.217305 | orchestrator |
2026-04-17 00:22:25.217317 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-04-17 00:22:25.217329 | orchestrator | Friday 17 April 2026 00:22:16 +0000 (0:00:01.014) 0:00:14.097 **********
2026-04-17 00:22:25.217340 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-17 00:22:25.217352 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-17 00:22:25.217363 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-17 00:22:25.217375 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-17 00:22:25.217386 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-17 00:22:25.217417 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-17 00:22:25.217429 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-17 00:22:25.217439 | orchestrator |
2026-04-17 00:22:25.217451 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-04-17 00:22:25.217462 | orchestrator | Friday 17 April 2026 00:22:21 +0000 (0:00:05.214) 0:00:19.311 **********
2026-04-17 00:22:25.217474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-17 00:22:25.217488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-17 00:22:25.217499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-17 00:22:25.217510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-17 00:22:25.217521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-17 00:22:25.217531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-17 00:22:25.217542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-17 00:22:25.217553 | orchestrator |
2026-04-17 00:22:25.217582 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:25.217594 | orchestrator | Friday 17 April 2026 00:22:21 +0000 (0:00:00.178) 0:00:19.489 **********
2026-04-17 00:22:25.217606 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKAg7a0ViYQXyW/OaCpkryoma1DTs3BIAiAGqPd9Cf4v)
2026-04-17 00:22:25.217621 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK9K+z6mdXXJUyaM5TMK6MPjbTYP8yIFym6ssECy1GSk+twbi/eq7UcJSxZTSFwCjhKBTl80/n+vzCBPPd6lgbKmNKg2VAkFVfOetTfpttarb2LuNnVbG+bR33uQvl/2aDrEKBn3WZ0/bnnXB9Ch4sU7XauADT2XiaBgnIl38zWba1HaTvTCwcCAVEWw4/A0sPFBTo2QCevAvpuTAOYlzyK1n8WsUl6NQPkW4vm5SJ5iQ63/GSWe4dY3jmtxEQw3MeCxoWKHzaM8UflX+fuUtBKiRLHfmgGOwPge8xJCJjaZNppPC7EGMl0c62rtAuh127qzuOpH8x0LRdwyruiOurgVPLW6nEZXERO1etZIdhJCi01Xxm/I8gZE3tFOohzSpQaR5EiQt5+1SJ4Ay9WK6rqajCOysmXAxrfxO/Hcru1JKBaaAUUoIioqKCxHJThpnc6BGpjYPsdwmme7V4zygvuIGTu7l1p8mCKWSOHg9xuJUV+geyPKlvlklFqDSYBkE=)
2026-04-17 00:22:25.217634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO8kjBfsuGWlfhpa+LCisLXp8/+sxB6e19EtNKR+CSMZmU/nI1IWVDeyxkkY48KOd8IGvn7ITc5EAFD6BPY+oic=)
2026-04-17 00:22:25.217646 | orchestrator |
2026-04-17 00:22:25.217659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:25.217671 | orchestrator | Friday 17 April 2026 00:22:22 +0000 (0:00:01.016) 0:00:20.506 **********
2026-04-17 00:22:25.217684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBeXRmikAglRMd2KqN+h6iFYaRyc92px637jCYcQsZlt)
2026-04-17 00:22:25.217696 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5RvffEVhHKlaccmliZoO0CGNkRlKpDpuc3Fqg7Dp0UiZU466B9W5jP+QoHOTDrH0YqEdP9vQ5bX8+48zZyn1zMql35EE5tEEVPxNaso2HMFlbxCH98BH8b6lihBiOSmr2b/qsnRPE1CDmi2hFaE3BT8/rOr/+4zrap/BFfz552R8+hqVgSnOSZ7OW0CSxK26jSZ3KBEYFd8WOvoPPajMHsB8/ZJShGPDXXJZOHv8fxdoHMdR/CW1huQFDk7Iu6LZC9sXxmiX+Wt+ZCK9gWfOg0Zzz1anuCEo061bTXwntjM/2y3HZ2Z0ZLkrLuajw4RA3fMWMUUVv15Y7Dmj4ANVnKrcRRyBD9hEh6/DhHbI2D1Zt7mm7QATfOPAJ8/X38r2PCYsjkTDyuauoi8QhKN+Pq7L2dYQf+/G3103qzJrdBe22xmyhswbhWJltivTeJgjF0Fl8QqGJBcRCKOvOtGPdkJEAKDEsHeNn47moU632EmcXQjCVixNbXpnGHUeMkRs=)
2026-04-17 00:22:25.217718 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJtR5gGjCO3OKP3KoBPISHEpdtgX+aqHxM08IMLEYOxZ9jN29tgX/wqz5Q5rIED1827UB787CU/sG6djT6ezRkA=)
2026-04-17 00:22:25.217730 | orchestrator |
2026-04-17 00:22:25.217776 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:25.217795 | orchestrator | Friday 17 April 2026 00:22:23 +0000 (0:00:01.031) 0:00:21.538 **********
2026-04-17 00:22:25.217813 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOEuPbbd3w08GsVxE3mVHVzW2UkraNzp8RIqdStHrMJ/gF8H1gkPHy8CzdzXgBo7ZrudrEfLClVgIGBlu9PgWs=)
2026-04-17 00:22:25.217835 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtp2zLtx8U7XLsQm2Vc2U2t2127M6WG1no2Px6qXm/J8EGWoF/HrSzYJoUnaoDsoujYIgkiTvGeIsDvpRBkoNdO1oNxD2wyiu1wBao3bxcW6Q8rMkqQVAPnrpnyxOyGx06isNHtdKZB40cdHZ0U2dHJyGXnfhH6kLjEvEyNPuBh2Ph7En/sc0WwN0zvNcuMs9I4l12Dc6ffK1gJ0TPsRbOyk7UXxx5bTqXSlVgCY65390NeF0EAY9boa1HNuCNaAHkUxCRMpCRnG4NnkiUIAJogcM7NVD0yLHlUWIuVpa2wRZMyHWzfiVreFkSiIW5Vkks07MAkbjgsppsd3+VPIASLRPUQ371AgUjpQyywLNJlcZMMsS8HygvSADMluK8q1SNxMvknT6HddPHJ1Rxd2J2wl9+goCgMadvcYGVNz45WGQLnswR+q4AjnPX/t3rnN/jSGjYofxrT2KDCH6vKoZA0NlLbTt2qiElNCA9+Zqw+hdnaJ16RtfuBN8cyWUE/Hk=)
2026-04-17 00:22:25.217866 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHCbCBoa+qigpcVcFDsk/1XvDEHmQ8nRw1ndrqcjWkcp)
2026-04-17 00:22:25.217886 | orchestrator |
2026-04-17 00:22:25.217907 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:25.217926 | orchestrator | Friday 17 April 2026 00:22:24 +0000 (0:00:00.999) 0:00:22.538 **********
2026-04-17 00:22:25.217946 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIItnGPZu8LhMFpCMDvtAYDupR2R+f25GzM4ypQHoGIRb)
2026-04-17 00:22:25.217977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDyqTdfwi0A4ZUjaRjP06Su4+nQv2icrJYKZ1P/AE2+9pJKLPI41WrHakZnwgCJQbR0T7GyTKKBi9JchjTIOkuoECg1erEl+foVQkS7CLUFqpPF1JczaLYRqWk1Z8LaZ/jeTsbgd4jChvFtfW6hHKBWQmx2mEdzDuFjmx0qiRjQOS9llwsYBgxy6WQ/ASr82bGUUkRw8/GqQVWTTNxplXTrmqPtwtYtU0Iwul7n95iTTk6fy2YjzvKiCYa0LAI7WqU3YZnagIKFeYqMRCCewDIQvvvrkzyAmslmJbf/og1Vj5kdS6mrYnjXz/dFuKIN6j23sxUuZSLs2KezP6Dku4QhymEkRnhw1OvGCZX3n6y390LaiKocf0V5FoB26sTcaf7XCt3lMxr1xHdVpaPKg2WAXzA1KRvCBBbfmLGlpX4iD2yhiBbFFpSRcRhlJmfRxCe/bkZCkXvlI4rFPh+7HBSQqGQtQEtzxdfZ7SNHRaBCRdyPNwSAdQS2ok+myyZKeU=)
2026-04-17 00:22:29.639647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBET29mBFv94bJO+rKQsAp2hTAyEy0QKY00TfptEcUYI147C7wYbDjUdJAzu9L790T3T+9GCIHH7Tqvn50YebIgw=)
2026-04-17 00:22:29.639810 | orchestrator |
2026-04-17 00:22:29.639830 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:29.639843 | orchestrator | Friday 17 April 2026 00:22:25 +0000 (0:00:01.060) 0:00:23.598 **********
2026-04-17 00:22:29.639857 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDt4dxak5C4XZIn4skwLIgUZQYkMdbyUUoMeHXVmmcV3ZT/pqfL5EEsfp/VKY8fYlRQT+mBp8kgqxYCoj7rheUxKzmOefMQiLLVlASbLbQkR/00w4XsG+OQ8Papx7GrSRxvEZzD8AC9LS8nYIPgRLD+QUhmLWZUBjoD9Z3oLuhc73tN4FUwvIDWPjQQrqcXuUd3Lu5heu/2/rHRltiHTs2ThrfsuxUp2rPqDGRxMaimqH6qdJDdfWHeLUMpUME5uiqxTukWn/ceg0RtrLPGR1nnITEq7lY61Gu3BlJFvfePmqQQjRotgRL0khSmKhV/xQQdPQjSmrjlVyKgEOdk0AVH8YWl1aj7A598XoHWg2UnmRwtRmDHzk8TXemZNJU//OYauU5CJKCPBvcAhMOsnyWxjTqUKmxDQlMvdBHS1o9YKOHS215vNaMC+0c1fA8SRV75aSSdKaVIwbrohiHBf5nk3y9zTrvRL5/h/eMuiJTcrqV4GJEtkQR0KLwC8HzPm3c=)
2026-04-17 00:22:29.639896 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF0Op5bhhfsr9apvjSIUhS2dRhSTVJ3Q2h6o4Tsfdq3MXxzIoQ8SeYnF2ZPczXxGOY7X6gHvpqgYFTrqwmGq/x0=)
2026-04-17 00:22:29.639909 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGpfU8RH9p2r+CPGtll9ujR/yw6ArfMLQWeHSxipjjh)
2026-04-17 00:22:29.639921 | orchestrator |
2026-04-17 00:22:29.639932 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:29.639944 | orchestrator | Friday 17 April 2026 00:22:26 +0000 (0:00:01.015) 0:00:24.613 **********
2026-04-17 00:22:29.639954 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILmYYK4vusYGvCHhhF76EL8Xnx7N5PQJQ6KOi9cdA5fZ)
2026-04-17 00:22:29.639967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpTJUhZEEmI8t9yP+NLXvUBvodXCF6+iHtFwavPvNc3o3KnR5fIl2GaZMemhHbuNWKTZhOGeG8gr5PyWjv/wFp3dLWtYODaQiRfNtMyc0f3568jkxazHn93Hn7brDbv87XvWEMFco4vNGLdUZfwRdbh2AEMGZHhIqq64GGRoQFR5OeiKomzw60mm9/yaPPXwR37BsaKTZm1R+Y7ZMQZWYVlY0vUhCGAamAmd6gFF+yuYmrWYKUH2ES+GlIjDgzt07WCL6BYe91dTJc34xQeRnnipFXqnrjh0nn6I5Fs4x8meg2Zil8cTVTx+kxQEB6VyL6Lamki2uhry7myc5FsPgTIuQDGdZcK8h/aW0YW2qK/em37/9BStOBvJljcsGUpkZJnG1LmzYIEVkTVUtgw6cXzPPPUsRnldSgoCgwlfC97rmDTJrNATiMhvdEh1nI5yRQCek9MbJvt9zh/yDUHaOUrLD2jmNooI2B9u1o2LpxYeDjQyDxmRLJeBJa/A4HBV0=)
2026-04-17 00:22:29.639978 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJAVlrCG3FYpupA63/VZ3wCEaBDuJzkfHVgq40cb+Ksdv2Gn0K9oxOnQK8gTgTU+UroLbdSUQ0rh/76sg636c5E=)
2026-04-17 00:22:29.639989 | orchestrator |
2026-04-17 00:22:29.640000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-17 00:22:29.640011 | orchestrator | Friday 17 April 2026 00:22:27 +0000 (0:00:01.074) 0:00:25.688 **********
2026-04-17 00:22:29.640028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRMldZPgpWL1PB+5F+Xx46dzb6vPZoGTeyBPOTlqhTmbt1KpTo1pvKwVkFnVitbPJnfrk0jB/UJOptHCyCxT0as23/wCnVRuPA2NQVuHRCA5LL66GbfEOfoLR3CBdbzOQYjAA0kLDGR+CG9Wc22Ctk4mQvmZWVdzU+RQ+Rj47tiiNj4q393jmDhgqA5zvh9Z/H4FnVQdHh92TSwva/SwjZZNhUghAzlWVAB1UOFePd/YHACx3PjxXQrXMDxntknW821yMIUO99moE12LRCyU1OFg+YZIqc+uPB0dX4SHzFjU1Ji/75FLbgWrjS+uesg2azPqHJYsk4RNsr4m5Lsoz+wUuPYIyKy5PYcmlQdkVu5YVzHg4zJAkiIRPgJmKjpeFrZNr6lFvnbR2vlf8KOxoyWf8rhAr8qaqa0oPyapaOQXR1jwhYUPScAWCycHCuBBq3yLtfZW8WGlpMWbY7EXoL8Yo/014lnsBmEza9jvKfnMVrEBkI+NsX1wfUQeP9scs=)
2026-04-17 00:22:29.640046 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL4uIUTjM6fJZNqPhsvI2Ii3rF/ZJ+kQtXMbUbx8dVwE88uj9FVe9I15A4P4zaaz9lBMHkNXuRW/w5XVlo1hzB8=)
2026-04-17 00:22:29.640064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOYoYtSUEMV5AxqNOK14/isjJT04Syj/7Gt2b2GH25uB)
2026-04-17 00:22:29.640084 | orchestrator |
2026-04-17 00:22:29.640102 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-04-17 00:22:29.640120 | orchestrator | Friday 17 April 2026 00:22:28 +0000 (0:00:01.061) 0:00:26.749 **********
2026-04-17 00:22:29.640138 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-17 00:22:29.640156 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-17 00:22:29.640175 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-17 00:22:29.640194 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-17 00:22:29.640237 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-17 00:22:29.640257 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-17 00:22:29.640277 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-17 00:22:29.640296 | orchestrator |
skipping: [testbed-manager] 2026-04-17 00:22:29.640316 | orchestrator | 2026-04-17 00:22:29.640336 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-17 00:22:29.640371 | orchestrator | Friday 17 April 2026 00:22:28 +0000 (0:00:00.177) 0:00:26.927 ********** 2026-04-17 00:22:29.640382 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:22:29.640393 | orchestrator | 2026-04-17 00:22:29.640404 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-17 00:22:29.640418 | orchestrator | Friday 17 April 2026 00:22:28 +0000 (0:00:00.039) 0:00:26.967 ********** 2026-04-17 00:22:29.640436 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:22:29.640453 | orchestrator | 2026-04-17 00:22:29.640471 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-17 00:22:29.640489 | orchestrator | Friday 17 April 2026 00:22:28 +0000 (0:00:00.055) 0:00:27.023 ********** 2026-04-17 00:22:29.640507 | orchestrator | changed: [testbed-manager] 2026-04-17 00:22:29.640519 | orchestrator | 2026-04-17 00:22:29.640530 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:22:29.640541 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:22:29.640553 | orchestrator | 2026-04-17 00:22:29.640564 | orchestrator | 2026-04-17 00:22:29.640576 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:22:29.640595 | orchestrator | Friday 17 April 2026 00:22:29 +0000 (0:00:00.463) 0:00:27.486 ********** 2026-04-17 00:22:29.640612 | orchestrator | =============================================================================== 2026-04-17 00:22:29.640653 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.32s 2026-04-17 
00:22:29.640671 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2026-04-17 00:22:29.640688 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2026-04-17 00:22:29.640700 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-17 00:22:29.640710 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-17 00:22:29.640721 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-17 00:22:29.640732 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-17 00:22:29.640774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-17 00:22:29.640786 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 00:22:29.640797 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-17 00:22:29.640807 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-17 00:22:29.640818 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-17 00:22:29.640835 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-17 00:22:29.640851 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-17 00:22:29.640870 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-17 00:22:29.640888 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-17 00:22:29.640908 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s 2026-04-17 
00:22:29.640926 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-04-17 00:22:29.640946 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-04-17 00:22:29.640965 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-17 00:22:29.807693 | orchestrator | + osism apply squid 2026-04-17 00:22:41.143107 | orchestrator | 2026-04-17 00:22:41 | INFO  | Prepare task for execution of squid. 2026-04-17 00:22:41.211280 | orchestrator | 2026-04-17 00:22:41 | INFO  | Task 5431fc18-95e3-482b-a409-eb0c27536790 (squid) was prepared for execution. 2026-04-17 00:22:41.211408 | orchestrator | 2026-04-17 00:22:41 | INFO  | It takes a moment until task 5431fc18-95e3-482b-a409-eb0c27536790 (squid) has been started and output is visible here. 2026-04-17 00:24:44.253366 | orchestrator | 2026-04-17 00:24:44.253480 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-17 00:24:44.253499 | orchestrator | 2026-04-17 00:24:44.253512 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-17 00:24:44.253525 | orchestrator | Friday 17 April 2026 00:22:44 +0000 (0:00:00.174) 0:00:00.174 ********** 2026-04-17 00:24:44.253537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 00:24:44.253549 | orchestrator | 2026-04-17 00:24:44.253560 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-17 00:24:44.253571 | orchestrator | Friday 17 April 2026 00:22:44 +0000 (0:00:00.072) 0:00:00.247 ********** 2026-04-17 00:24:44.253583 | orchestrator | ok: [testbed-manager] 2026-04-17 00:24:44.253595 | orchestrator | 2026-04-17 00:24:44.253606 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-17 00:24:44.253617 | orchestrator | Friday 17 April 2026 00:22:46 +0000 (0:00:02.008) 0:00:02.255 ********** 2026-04-17 00:24:44.253628 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-17 00:24:44.253639 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-17 00:24:44.253650 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-17 00:24:44.253719 | orchestrator | 2026-04-17 00:24:44.253734 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-17 00:24:44.253745 | orchestrator | Friday 17 April 2026 00:22:47 +0000 (0:00:01.104) 0:00:03.360 ********** 2026-04-17 00:24:44.253756 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-17 00:24:44.253768 | orchestrator | 2026-04-17 00:24:44.253779 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-17 00:24:44.253790 | orchestrator | Friday 17 April 2026 00:22:48 +0000 (0:00:00.974) 0:00:04.334 ********** 2026-04-17 00:24:44.253802 | orchestrator | ok: [testbed-manager] 2026-04-17 00:24:44.253813 | orchestrator | 2026-04-17 00:24:44.253824 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-17 00:24:44.253835 | orchestrator | Friday 17 April 2026 00:22:48 +0000 (0:00:00.305) 0:00:04.639 ********** 2026-04-17 00:24:44.253846 | orchestrator | changed: [testbed-manager] 2026-04-17 00:24:44.253857 | orchestrator | 2026-04-17 00:24:44.253868 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-17 00:24:44.253879 | orchestrator | Friday 17 April 2026 00:22:49 +0000 (0:00:00.812) 0:00:05.452 ********** 2026-04-17 00:24:44.253890 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-04-17 00:24:44.253902 | orchestrator | ok: [testbed-manager] 2026-04-17 00:24:44.253916 | orchestrator | 2026-04-17 00:24:44.253928 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-17 00:24:44.253941 | orchestrator | Friday 17 April 2026 00:23:31 +0000 (0:00:42.179) 0:00:47.631 ********** 2026-04-17 00:24:44.253953 | orchestrator | changed: [testbed-manager] 2026-04-17 00:24:44.253965 | orchestrator | 2026-04-17 00:24:44.253978 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-17 00:24:44.253990 | orchestrator | Friday 17 April 2026 00:23:43 +0000 (0:00:11.786) 0:00:59.417 ********** 2026-04-17 00:24:44.254003 | orchestrator | Pausing for 60 seconds 2026-04-17 00:24:44.254080 | orchestrator | changed: [testbed-manager] 2026-04-17 00:24:44.254094 | orchestrator | 2026-04-17 00:24:44.254107 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-17 00:24:44.254120 | orchestrator | Friday 17 April 2026 00:24:43 +0000 (0:01:00.085) 0:01:59.503 ********** 2026-04-17 00:24:44.254133 | orchestrator | ok: [testbed-manager] 2026-04-17 00:24:44.254188 | orchestrator | 2026-04-17 00:24:44.254208 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-17 00:24:44.254226 | orchestrator | Friday 17 April 2026 00:24:43 +0000 (0:00:00.060) 0:01:59.564 ********** 2026-04-17 00:24:44.254245 | orchestrator | changed: [testbed-manager] 2026-04-17 00:24:44.254263 | orchestrator | 2026-04-17 00:24:44.254282 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:24:44.254299 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:24:44.254317 | orchestrator | 2026-04-17 00:24:44.254334 | orchestrator | 2026-04-17 00:24:44.254353 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:24:44.254370 | orchestrator | Friday 17 April 2026 00:24:44 +0000 (0:00:00.632) 0:02:00.196 ********** 2026-04-17 00:24:44.254389 | orchestrator | =============================================================================== 2026-04-17 00:24:44.254407 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-04-17 00:24:44.254427 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 42.18s 2026-04-17 00:24:44.254438 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.79s 2026-04-17 00:24:44.254449 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.01s 2026-04-17 00:24:44.254460 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.10s 2026-04-17 00:24:44.254471 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.97s 2026-04-17 00:24:44.254482 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.81s 2026-04-17 00:24:44.254493 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-04-17 00:24:44.254504 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2026-04-17 00:24:44.254514 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-04-17 00:24:44.254525 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-17 00:24:44.414202 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-17 00:24:44.414284 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-17 00:24:44.420868 | orchestrator | + set -e 2026-04-17 00:24:44.420944 | orchestrator | + NAMESPACE=kolla 
2026-04-17 00:24:44.420969 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-17 00:24:44.423858 | orchestrator | ++ semver latest 9.0.0 2026-04-17 00:24:44.477097 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-17 00:24:44.477190 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-17 00:24:44.477784 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-17 00:24:55.831256 | orchestrator | 2026-04-17 00:24:55 | INFO  | Prepare task for execution of operator. 2026-04-17 00:24:55.906220 | orchestrator | 2026-04-17 00:24:55 | INFO  | Task d3dd05da-ffe6-412e-8464-edd9e1057f5e (operator) was prepared for execution. 2026-04-17 00:24:55.906312 | orchestrator | 2026-04-17 00:24:55 | INFO  | It takes a moment until task d3dd05da-ffe6-412e-8464-edd9e1057f5e (operator) has been started and output is visible here. 2026-04-17 00:25:10.172203 | orchestrator | 2026-04-17 00:25:10.172298 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-17 00:25:10.172314 | orchestrator | 2026-04-17 00:25:10.172326 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-17 00:25:10.172337 | orchestrator | Friday 17 April 2026 00:24:58 +0000 (0:00:00.135) 0:00:00.135 ********** 2026-04-17 00:25:10.172348 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:25:10.172360 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:25:10.172372 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:25:10.172398 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:25:10.172410 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:25:10.172421 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:25:10.172453 | orchestrator | 2026-04-17 00:25:10.172469 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-17 00:25:10.172480 | orchestrator | Friday 17 April 2026 
00:25:02 +0000 (0:00:03.345) 0:00:03.480 ********** 2026-04-17 00:25:10.172491 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:25:10.172502 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:25:10.172513 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:25:10.172524 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:25:10.172534 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:25:10.172545 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:25:10.172555 | orchestrator | 2026-04-17 00:25:10.172566 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-17 00:25:10.172577 | orchestrator | 2026-04-17 00:25:10.172588 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-17 00:25:10.172599 | orchestrator | Friday 17 April 2026 00:25:02 +0000 (0:00:00.732) 0:00:04.213 ********** 2026-04-17 00:25:10.172610 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:25:10.172621 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:25:10.172631 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:25:10.172642 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:25:10.172683 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:25:10.172694 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:25:10.172704 | orchestrator | 2026-04-17 00:25:10.172715 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-17 00:25:10.172726 | orchestrator | Friday 17 April 2026 00:25:03 +0000 (0:00:00.117) 0:00:04.331 ********** 2026-04-17 00:25:10.172737 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:25:10.172748 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:25:10.172761 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:25:10.172779 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:25:10.172791 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:25:10.172804 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:25:10.172816 | 
orchestrator | 2026-04-17 00:25:10.172829 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-17 00:25:10.172841 | orchestrator | Friday 17 April 2026 00:25:03 +0000 (0:00:00.146) 0:00:04.478 ********** 2026-04-17 00:25:10.172853 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:25:10.172866 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:25:10.172878 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:25:10.172890 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:25:10.172903 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:25:10.172915 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:25:10.172927 | orchestrator | 2026-04-17 00:25:10.172940 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-17 00:25:10.172953 | orchestrator | Friday 17 April 2026 00:25:03 +0000 (0:00:00.611) 0:00:05.090 ********** 2026-04-17 00:25:10.172966 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:25:10.172978 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:25:10.172990 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:25:10.173003 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:25:10.173015 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:25:10.173027 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:25:10.173039 | orchestrator | 2026-04-17 00:25:10.173052 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-17 00:25:10.173064 | orchestrator | Friday 17 April 2026 00:25:04 +0000 (0:00:00.869) 0:00:05.959 ********** 2026-04-17 00:25:10.173077 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-17 00:25:10.173090 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-17 00:25:10.173102 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-17 00:25:10.173114 | orchestrator | changed: [testbed-node-2] => 
(item=adm)
2026-04-17 00:25:10.173125 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-17 00:25:10.173136 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-17 00:25:10.173147 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-17 00:25:10.173165 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-17 00:25:10.173176 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-17 00:25:10.173187 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-17 00:25:10.173198 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-17 00:25:10.173209 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-17 00:25:10.173220 | orchestrator |
2026-04-17 00:25:10.173231 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-17 00:25:10.173242 | orchestrator | Friday 17 April 2026 00:25:05 +0000 (0:00:01.135) 0:00:07.095 **********
2026-04-17 00:25:10.173252 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:25:10.173263 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:25:10.173274 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:25:10.173284 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:25:10.173295 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:25:10.173306 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:25:10.173317 | orchestrator |
2026-04-17 00:25:10.173328 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-17 00:25:10.173339 | orchestrator | Friday 17 April 2026 00:25:07 +0000 (0:00:01.311) 0:00:08.407 **********
2026-04-17 00:25:10.173350 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 00:25:10.173361 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 00:25:10.173372 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 00:25:10.173383 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 00:25:10.173394 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 00:25:10.173421 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-17 00:25:10.173433 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-17 00:25:10.173443 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-17 00:25:10.173454 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-17 00:25:10.173465 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-17 00:25:10.173476 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-17 00:25:10.173487 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-17 00:25:10.173497 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-17 00:25:10.173509 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-17 00:25:10.173519 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-17 00:25:10.173530 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-17 00:25:10.173541 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-17 00:25:10.173551 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-17 00:25:10.173562 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-17 00:25:10.173573 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-17 00:25:10.173584 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-17 00:25:10.173594 | orchestrator |
2026-04-17 00:25:10.173606 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-17 00:25:10.173617 | orchestrator | Friday 17 April 2026 00:25:08 +0000 (0:00:01.179) 0:00:09.586 **********
2026-04-17 00:25:10.173627 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:10.173638 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:25:10.173698 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:10.173711 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:10.173722 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:25:10.173733 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:10.173751 | orchestrator |
2026-04-17 00:25:10.173762 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-17 00:25:10.173773 | orchestrator | Friday 17 April 2026 00:25:08 +0000 (0:00:00.138) 0:00:09.724 **********
2026-04-17 00:25:10.173784 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:10.173795 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:25:10.173805 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:10.173816 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:10.173827 | orchestrator | skipping:
[testbed-node-4]
2026-04-17 00:25:10.173838 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:10.173848 | orchestrator |
2026-04-17 00:25:10.173859 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-17 00:25:10.173870 | orchestrator | Friday 17 April 2026 00:25:08 +0000 (0:00:00.158) 0:00:09.883 **********
2026-04-17 00:25:10.173881 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:25:10.173892 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:25:10.173903 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:25:10.173913 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:25:10.173924 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:25:10.173935 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:25:10.173946 | orchestrator |
2026-04-17 00:25:10.173956 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-17 00:25:10.173967 | orchestrator | Friday 17 April 2026 00:25:09 +0000 (0:00:00.466) 0:00:10.349 **********
2026-04-17 00:25:10.173978 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:10.173989 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:25:10.173999 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:10.174010 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:10.174119 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:25:10.174131 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:10.174142 | orchestrator |
2026-04-17 00:25:10.174153 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-17 00:25:10.174164 | orchestrator | Friday 17 April 2026 00:25:09 +0000 (0:00:00.175) 0:00:10.525 **********
2026-04-17 00:25:10.174175 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-17 00:25:10.174187 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:25:10.174197 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 00:25:10.174208 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:25:10.174219 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-17 00:25:10.174230 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-17 00:25:10.174241 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-17 00:25:10.174252 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:25:10.174263 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:25:10.174274 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:25:10.174284 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-17 00:25:10.174295 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:25:10.174306 | orchestrator |
2026-04-17 00:25:10.174317 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-17 00:25:10.174327 | orchestrator | Friday 17 April 2026 00:25:09 +0000 (0:00:00.633) 0:00:11.158 **********
2026-04-17 00:25:10.174338 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:10.174349 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:25:10.174359 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:10.174370 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:10.174381 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:25:10.174391 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:10.174402 | orchestrator |
2026-04-17 00:25:10.174413 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-17 00:25:10.174424 | orchestrator | Friday 17 April 2026 00:25:10 +0000 (0:00:00.148) 0:00:11.307 **********
2026-04-17 00:25:10.174435 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:10.174446 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:25:10.174464 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:10.174474 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:10.174494 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:25:11.284093 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:11.284185 | orchestrator |
2026-04-17 00:25:11.284202 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-17 00:25:11.284215 | orchestrator | Friday 17 April 2026 00:25:10 +0000 (0:00:00.147) 0:00:11.455 **********
2026-04-17 00:25:11.284244 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:11.284256 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:25:11.284267 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:11.284278 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:11.284289 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:25:11.284300 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:11.284311 | orchestrator |
2026-04-17 00:25:11.284322 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-17 00:25:11.284333 | orchestrator | Friday 17 April 2026 00:25:10 +0000 (0:00:00.132) 0:00:11.587 **********
2026-04-17 00:25:11.284344 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:25:11.284355 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:25:11.284366 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:25:11.284377 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:25:11.284388 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:25:11.284399 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:25:11.284410 | orchestrator |
2026-04-17 00:25:11.284421 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-17 00:25:11.284432 | orchestrator | Friday 17 April 2026 00:25:10 +0000 (0:00:00.582) 0:00:12.169 **********
2026-04-17 00:25:11.284443 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:25:11.284454 | orchestrator | skipping:
[testbed-node-1]
2026-04-17 00:25:11.284465 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:25:11.284475 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:25:11.284486 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:25:11.284497 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:25:11.284508 | orchestrator |
2026-04-17 00:25:11.284519 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:25:11.284531 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 00:25:11.284564 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 00:25:11.284586 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 00:25:11.284597 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 00:25:11.284608 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 00:25:11.284619 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 00:25:11.284630 | orchestrator |
2026-04-17 00:25:11.284641 | orchestrator |
2026-04-17 00:25:11.284695 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:25:11.284709 | orchestrator | Friday 17 April 2026 00:25:11 +0000 (0:00:00.209) 0:00:12.379 **********
2026-04-17 00:25:11.284721 | orchestrator | ===============================================================================
2026-04-17 00:25:11.284733 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2026-04-17 00:25:11.284746 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s
2026-04-17 00:25:11.284778 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s
2026-04-17 00:25:11.284792 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s
2026-04-17 00:25:11.284804 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2026-04-17 00:25:11.284816 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2026-04-17 00:25:11.284828 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.63s
2026-04-17 00:25:11.284840 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-04-17 00:25:11.284852 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.58s
2026-04-17 00:25:11.284864 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.47s
2026-04-17 00:25:11.284876 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-04-17 00:25:11.284888 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-04-17 00:25:11.284901 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-04-17 00:25:11.284913 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-17 00:25:11.284925 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-04-17 00:25:11.284939 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-04-17 00:25:11.284951 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-04-17 00:25:11.284963 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2026-04-17 00:25:11.284975 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.12s
2026-04-17 00:25:11.449152 | orchestrator | + osism apply --environment custom facts
2026-04-17 00:25:12.660839 | orchestrator | 2026-04-17 00:25:12 | INFO  | Trying to run play facts in environment custom
2026-04-17 00:25:22.774130 | orchestrator | 2026-04-17 00:25:22 | INFO  | Prepare task for execution of facts.
2026-04-17 00:25:22.847013 | orchestrator | 2026-04-17 00:25:22 | INFO  | Task 2c05b1f0-2b7f-4846-b654-0bd3d4e907c0 (facts) was prepared for execution.
2026-04-17 00:25:22.847109 | orchestrator | 2026-04-17 00:25:22 | INFO  | It takes a moment until task 2c05b1f0-2b7f-4846-b654-0bd3d4e907c0 (facts) has been started and output is visible here.
2026-04-17 00:26:06.724889 | orchestrator |
2026-04-17 00:26:06.725024 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-17 00:26:06.725046 | orchestrator |
2026-04-17 00:26:06.725057 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-17 00:26:06.725072 | orchestrator | Friday 17 April 2026 00:25:25 +0000 (0:00:00.086) 0:00:00.086 **********
2026-04-17 00:26:06.725084 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:06.725098 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:26:06.725114 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:26:06.725128 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:26:06.725141 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:26:06.725154 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:26:06.725163 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:26:06.725171 | orchestrator |
2026-04-17 00:26:06.725179 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-17 00:26:06.725188 | orchestrator | Friday 17 April 2026 00:25:27 +0000 (0:00:01.302) 0:00:01.388
**********
2026-04-17 00:26:06.725196 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:06.725204 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:26:06.725212 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:26:06.725220 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:26:06.725229 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:26:06.725237 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:26:06.725270 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:26:06.725278 | orchestrator |
2026-04-17 00:26:06.725290 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-17 00:26:06.725303 | orchestrator |
2026-04-17 00:26:06.725315 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-17 00:26:06.725326 | orchestrator | Friday 17 April 2026 00:25:28 +0000 (0:00:01.257) 0:00:02.646 **********
2026-04-17 00:26:06.725337 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.725349 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.725360 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.725371 | orchestrator |
2026-04-17 00:26:06.725383 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-17 00:26:06.725396 | orchestrator | Friday 17 April 2026 00:25:28 +0000 (0:00:00.071) 0:00:02.717 **********
2026-04-17 00:26:06.725407 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.725420 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.725434 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.725447 | orchestrator |
2026-04-17 00:26:06.725461 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-17 00:26:06.725475 | orchestrator | Friday 17 April 2026 00:25:28 +0000 (0:00:00.160) 0:00:02.878 **********
2026-04-17 00:26:06.725484 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.725492 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.725500 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.725507 | orchestrator |
2026-04-17 00:26:06.725515 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-17 00:26:06.725523 | orchestrator | Friday 17 April 2026 00:25:28 +0000 (0:00:00.177) 0:00:03.055 **********
2026-04-17 00:26:06.725532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:26:06.725542 | orchestrator |
2026-04-17 00:26:06.725550 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-17 00:26:06.725558 | orchestrator | Friday 17 April 2026 00:25:28 +0000 (0:00:00.119) 0:00:03.175 **********
2026-04-17 00:26:06.725566 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.725573 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.725581 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.725589 | orchestrator |
2026-04-17 00:26:06.725597 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-17 00:26:06.725605 | orchestrator | Friday 17 April 2026 00:25:29 +0000 (0:00:00.548) 0:00:03.723 **********
2026-04-17 00:26:06.725613 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:26:06.725655 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:26:06.725663 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:26:06.725671 | orchestrator |
2026-04-17 00:26:06.725680 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-17 00:26:06.725687 | orchestrator | Friday 17 April 2026 00:25:29 +0000 (0:00:00.102) 0:00:03.825 **********
2026-04-17 00:26:06.725696 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:26:06.725704 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:26:06.725712 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:26:06.725720 | orchestrator |
2026-04-17 00:26:06.725727 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-17 00:26:06.725736 | orchestrator | Friday 17 April 2026 00:25:30 +0000 (0:00:01.032) 0:00:04.858 **********
2026-04-17 00:26:06.725743 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.725751 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.725759 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.725767 | orchestrator |
2026-04-17 00:26:06.725775 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-17 00:26:06.725784 | orchestrator | Friday 17 April 2026 00:25:30 +0000 (0:00:00.421) 0:00:05.279 **********
2026-04-17 00:26:06.725792 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:26:06.725800 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:26:06.725816 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:26:06.725824 | orchestrator |
2026-04-17 00:26:06.725832 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-17 00:26:06.725839 | orchestrator | Friday 17 April 2026 00:25:31 +0000 (0:00:01.075) 0:00:06.355 **********
2026-04-17 00:26:06.725847 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:26:06.725856 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:26:06.725863 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:26:06.725871 | orchestrator |
2026-04-17 00:26:06.725879 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-17 00:26:06.725887 | orchestrator | Friday 17 April 2026 00:25:48 +0000 (0:00:16.719) 0:00:23.075 **********
2026-04-17 00:26:06.725895 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:26:06.725903 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:26:06.725911 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:26:06.725919 | orchestrator |
2026-04-17 00:26:06.725927 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-17 00:26:06.725952 | orchestrator | Friday 17 April 2026 00:25:48 +0000 (0:00:00.098) 0:00:23.173 **********
2026-04-17 00:26:06.725960 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:26:06.725968 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:26:06.725976 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:26:06.725984 | orchestrator |
2026-04-17 00:26:06.725992 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-17 00:26:06.726000 | orchestrator | Friday 17 April 2026 00:25:56 +0000 (0:00:08.128) 0:00:31.301 **********
2026-04-17 00:26:06.726008 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.726067 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.726076 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.726084 | orchestrator |
2026-04-17 00:26:06.726092 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-17 00:26:06.726100 | orchestrator | Friday 17 April 2026 00:25:57 +0000 (0:00:00.429) 0:00:31.731 **********
2026-04-17 00:26:06.726151 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-17 00:26:06.726160 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-17 00:26:06.726169 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-17 00:26:06.726176 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-17 00:26:06.726188 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-17 00:26:06.726197 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-17 00:26:06.726204 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-17 00:26:06.726212 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-17 00:26:06.726220 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-17 00:26:06.726229 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-17 00:26:06.726236 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-17 00:26:06.726244 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-17 00:26:06.726252 | orchestrator |
2026-04-17 00:26:06.726260 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-17 00:26:06.726268 | orchestrator | Friday 17 April 2026 00:26:00 +0000 (0:00:03.431) 0:00:35.163 **********
2026-04-17 00:26:06.726276 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.726284 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.726291 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.726299 | orchestrator |
2026-04-17 00:26:06.726307 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 00:26:06.726315 | orchestrator |
2026-04-17 00:26:06.726323 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 00:26:06.726331 | orchestrator | Friday 17 April 2026 00:26:02 +0000 (0:00:01.226) 0:00:36.390 **********
2026-04-17 00:26:06.726346 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:26:06.726353 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:26:06.726361 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:26:06.726369 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:06.726377 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:06.726385 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:06.726393 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:06.726400 | orchestrator |
2026-04-17 00:26:06.726408 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:26:06.726417 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:26:06.726425 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:26:06.726435 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:26:06.726443 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:26:06.726450 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 00:26:06.726459 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 00:26:06.726466 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 00:26:06.726474 | orchestrator |
2026-04-17 00:26:06.726482 | orchestrator |
2026-04-17 00:26:06.726490 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:26:06.726498 | orchestrator | Friday 17 April 2026 00:26:06 +0000 (0:00:04.671) 0:00:41.062 **********
2026-04-17 00:26:06.726506 | orchestrator | ===============================================================================
2026-04-17 00:26:06.726514 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.72s
2026-04-17 00:26:06.726522 | orchestrator | Install required packages (Debian) -------------------------------------- 8.13s
2026-04-17 00:26:06.726530 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s
2026-04-17 00:26:06.726538 | orchestrator | Copy fact files --------------------------------------------------------- 3.43s
2026-04-17 00:26:06.726546 | orchestrator | Create custom facts directory ------------------------------------------- 1.30s
2026-04-17 00:26:06.726554 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2026-04-17 00:26:06.726567 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.23s
2026-04-17 00:26:06.886582 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-04-17 00:26:06.886739 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-04-17 00:26:06.886756 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.55s
2026-04-17 00:26:06.886768 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-04-17 00:26:06.886779 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.42s
2026-04-17 00:26:06.886790 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-04-17 00:26:06.886801 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s
2026-04-17 00:26:06.886812 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-04-17 00:26:06.886823 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-04-17 00:26:06.886834 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-17 00:26:06.886886 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2026-04-17 00:26:07.058776 | orchestrator | + osism apply bootstrap
2026-04-17 00:26:18.336117 | orchestrator | 2026-04-17 00:26:18 | INFO  | Prepare task for execution of bootstrap.
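The `osism apply --environment custom facts` run above creates a custom facts directory on each host and copies static fact files (e.g. `testbed_ceph_devices`) into it. Ansible's setup module reads JSON `*.fact` files from `/etc/ansible/facts.d` and exposes them under the `ansible_local` variable. A minimal sketch of that loading mechanism (the `load_local_facts` helper and the temporary directory are illustrative, not part of OSISM):

```python
import json
import tempfile
from pathlib import Path

def load_local_facts(facts_dir):
    """Collect static *.fact files (JSON), keyed by file name, roughly the
    way Ansible's setup module exposes them under ansible_local."""
    facts = {}
    for path in sorted(Path(facts_dir).glob("*.fact")):
        facts[path.stem] = json.loads(path.read_text())
    return facts

# Illustrative stand-in for /etc/ansible/facts.d with one fact file,
# named like the testbed_ceph_devices item copied in the play above.
demo = Path(tempfile.mkdtemp())
(demo / "testbed_ceph_devices.fact").write_text(
    json.dumps({"devices": ["sdb", "sdc"]})
)
print(load_local_facts(demo))
```

In a playbook the same data would then be reachable as `ansible_local.testbed_ceph_devices.devices` after fact gathering.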
2026-04-17 00:26:18.408406 | orchestrator | 2026-04-17 00:26:18 | INFO  | Task 71497681-639b-42b8-ba29-80ad19fb2cb4 (bootstrap) was prepared for execution.
2026-04-17 00:26:18.408519 | orchestrator | 2026-04-17 00:26:18 | INFO  | It takes a moment until task 71497681-639b-42b8-ba29-80ad19fb2cb4 (bootstrap) has been started and output is visible here.
2026-04-17 00:26:33.766349 | orchestrator |
2026-04-17 00:26:33.766452 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-17 00:26:33.766466 | orchestrator |
2026-04-17 00:26:33.766476 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-17 00:26:33.766486 | orchestrator | Friday 17 April 2026 00:26:21 +0000 (0:00:00.190) 0:00:00.190 **********
2026-04-17 00:26:33.766495 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:33.766506 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:26:33.766515 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:26:33.766524 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:26:33.766533 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:33.766542 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:33.766551 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:33.766560 | orchestrator |
2026-04-17 00:26:33.766570 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-17 00:26:33.766579 | orchestrator |
2026-04-17 00:26:33.766588 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 00:26:33.766597 | orchestrator | Friday 17 April 2026 00:26:21 +0000 (0:00:00.304) 0:00:00.495 **********
2026-04-17 00:26:33.766652 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:26:33.766661 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:26:33.766671 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:26:33.766680 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:33.766689 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:33.766697 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:33.766706 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:33.766715 | orchestrator |
2026-04-17 00:26:33.766724 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-17 00:26:33.766733 | orchestrator |
2026-04-17 00:26:33.766741 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-17 00:26:33.766750 | orchestrator | Friday 17 April 2026 00:26:26 +0000 (0:00:04.715) 0:00:05.210 **********
2026-04-17 00:26:33.766760 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-17 00:26:33.766769 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-17 00:26:33.766778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-17 00:26:33.766787 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-17 00:26:33.766795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 00:26:33.766804 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-17 00:26:33.766813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 00:26:33.766821 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-17 00:26:33.766830 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 00:26:33.766839 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-17 00:26:33.766848 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-17 00:26:33.766856 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-17 00:26:33.766865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-17 00:26:33.766873 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-17 00:26:33.766905 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-17 00:26:33.766915 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-17 00:26:33.766923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-17 00:26:33.766932 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:26:33.766941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-17 00:26:33.766949 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-17 00:26:33.766958 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-17 00:26:33.766966 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-17 00:26:33.766975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-17 00:26:33.766983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-17 00:26:33.766992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-17 00:26:33.767001 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:26:33.767009 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-17 00:26:33.767018 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-17 00:26:33.767027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-17 00:26:33.767035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-17 00:26:33.767044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-17 00:26:33.767052 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-17 00:26:33.767061 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-17 00:26:33.767069 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-17 00:26:33.767078 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:26:33.767087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-17 00:26:33.767095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 00:26:33.767104 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-17 00:26:33.767113 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-17 00:26:33.767122 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-17 00:26:33.767130 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:26:33.767139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-17 00:26:33.767148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 00:26:33.767157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-17 00:26:33.767165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-17 00:26:33.767174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 00:26:33.767183 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:26:33.767207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-17 00:26:33.767217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-17 00:26:33.767225 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-17 00:26:33.767234 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:26:33.767242 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-17 00:26:33.767251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-17 00:26:33.767259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-17 00:26:33.767268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-17 00:26:33.767276 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:26:33.767285 | orchestrator |
2026-04-17 00:26:33.767294 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-17 00:26:33.767303 | orchestrator |
2026-04-17 00:26:33.767312 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-17 00:26:33.767320 | orchestrator | Friday 17 April 2026 00:26:27 +0000 (0:00:00.479) 0:00:05.690 **********
2026-04-17 00:26:33.767336 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:33.767345 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:33.767354 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:26:33.767362 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:33.767371 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:26:33.767379 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:26:33.767388 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:33.767396 | orchestrator |
2026-04-17 00:26:33.767405 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-17 00:26:33.767414 | orchestrator | Friday 17 April 2026 00:26:28 +0000 (0:00:01.230) 0:00:06.920 **********
2026-04-17 00:26:33.767422 | orchestrator | ok: [testbed-manager]
2026-04-17 00:26:33.767431 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:26:33.767440 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:26:33.767448 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:26:33.767457 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:26:33.767481 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:26:33.767490 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:26:33.767498 | orchestrator |
2026-04-17 00:26:33.767507 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-17 00:26:33.767516 | orchestrator | Friday 17 April 2026 00:26:29 +0000 (0:00:01.187) 0:00:08.108 **********
2026-04-17 00:26:33.767525 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:26:33.767537 | orchestrator | 2026-04-17 00:26:33.767545 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-17 00:26:33.767554 | orchestrator | Friday 17 April 2026 00:26:29 +0000 (0:00:00.254) 0:00:08.362 ********** 2026-04-17 00:26:33.767563 | orchestrator | changed: [testbed-manager] 2026-04-17 00:26:33.767572 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:33.767581 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:33.767589 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:26:33.767684 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:33.767696 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:26:33.767705 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:26:33.767714 | orchestrator | 2026-04-17 00:26:33.767723 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-17 00:26:33.767732 | orchestrator | Friday 17 April 2026 00:26:31 +0000 (0:00:01.559) 0:00:09.921 ********** 2026-04-17 00:26:33.767740 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:33.767750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:26:33.767761 | orchestrator | 2026-04-17 00:26:33.767770 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-17 00:26:33.767778 | orchestrator | Friday 17 April 2026 00:26:31 +0000 (0:00:00.256) 0:00:10.178 ********** 2026-04-17 00:26:33.767787 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:33.767796 | 
orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:33.767805 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:33.767813 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:26:33.767822 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:26:33.767831 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:26:33.767840 | orchestrator | 2026-04-17 00:26:33.767849 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-17 00:26:33.767857 | orchestrator | Friday 17 April 2026 00:26:32 +0000 (0:00:00.992) 0:00:11.170 ********** 2026-04-17 00:26:33.767866 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:33.767875 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:33.767883 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:26:33.767892 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:26:33.767907 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:33.767916 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:33.767925 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:26:33.767934 | orchestrator | 2026-04-17 00:26:33.767942 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-17 00:26:33.767951 | orchestrator | Friday 17 April 2026 00:26:33 +0000 (0:00:00.595) 0:00:11.766 ********** 2026-04-17 00:26:33.767960 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:26:33.767973 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:26:33.767982 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:26:33.767991 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:26:33.767999 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:26:33.768008 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:26:33.768016 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:33.768025 | orchestrator | 2026-04-17 00:26:33.768034 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-17 00:26:33.768044 | orchestrator | Friday 17 April 2026 00:26:33 +0000 (0:00:00.411) 0:00:12.177 ********** 2026-04-17 00:26:33.768052 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:33.768061 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:26:33.768076 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:26:45.635355 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:26:45.635467 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:26:45.635483 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:26:45.635495 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:26:45.635507 | orchestrator | 2026-04-17 00:26:45.635520 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-17 00:26:45.635533 | orchestrator | Friday 17 April 2026 00:26:33 +0000 (0:00:00.203) 0:00:12.381 ********** 2026-04-17 00:26:45.635547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:26:45.635575 | orchestrator | 2026-04-17 00:26:45.635587 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-17 00:26:45.635661 | orchestrator | Friday 17 April 2026 00:26:34 +0000 (0:00:00.285) 0:00:12.666 ********** 2026-04-17 00:26:45.635682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:26:45.635701 | orchestrator | 2026-04-17 00:26:45.635720 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-17 
00:26:45.635733 | orchestrator | Friday 17 April 2026 00:26:34 +0000 (0:00:00.340) 0:00:13.007 ********** 2026-04-17 00:26:45.635744 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.635756 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.635767 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.635778 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.635789 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.635799 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.635810 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.635821 | orchestrator | 2026-04-17 00:26:45.635833 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-17 00:26:45.635846 | orchestrator | Friday 17 April 2026 00:26:36 +0000 (0:00:01.727) 0:00:14.734 ********** 2026-04-17 00:26:45.635859 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:45.635872 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:26:45.635885 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:26:45.635898 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:26:45.635911 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:26:45.635923 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:26:45.635935 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:26:45.635976 | orchestrator | 2026-04-17 00:26:45.635990 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-17 00:26:45.636002 | orchestrator | Friday 17 April 2026 00:26:36 +0000 (0:00:00.208) 0:00:14.942 ********** 2026-04-17 00:26:45.636015 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.636027 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.636039 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.636051 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.636063 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.636076 | orchestrator 
| ok: [testbed-node-4] 2026-04-17 00:26:45.636088 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.636100 | orchestrator | 2026-04-17 00:26:45.636112 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-17 00:26:45.636124 | orchestrator | Friday 17 April 2026 00:26:36 +0000 (0:00:00.546) 0:00:15.489 ********** 2026-04-17 00:26:45.636136 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:45.636149 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:26:45.636161 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:26:45.636174 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:26:45.636186 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:26:45.636199 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:26:45.636212 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:26:45.636223 | orchestrator | 2026-04-17 00:26:45.636234 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-17 00:26:45.636246 | orchestrator | Friday 17 April 2026 00:26:37 +0000 (0:00:00.220) 0:00:15.709 ********** 2026-04-17 00:26:45.636257 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.636268 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:45.636280 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:45.636291 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:45.636301 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:26:45.636312 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:26:45.636323 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:26:45.636334 | orchestrator | 2026-04-17 00:26:45.636345 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-17 00:26:45.636356 | orchestrator | Friday 17 April 2026 00:26:37 +0000 (0:00:00.514) 0:00:16.224 ********** 2026-04-17 00:26:45.636367 | orchestrator | ok: 
[testbed-manager] 2026-04-17 00:26:45.636378 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:45.636389 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:26:45.636400 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:26:45.636411 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:45.636422 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:26:45.636432 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:45.636443 | orchestrator | 2026-04-17 00:26:45.636454 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-17 00:26:45.636473 | orchestrator | Friday 17 April 2026 00:26:38 +0000 (0:00:01.150) 0:00:17.375 ********** 2026-04-17 00:26:45.636485 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.636496 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.636506 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.636517 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.636528 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.636539 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.636550 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.636560 | orchestrator | 2026-04-17 00:26:45.636571 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-17 00:26:45.636582 | orchestrator | Friday 17 April 2026 00:26:39 +0000 (0:00:01.006) 0:00:18.381 ********** 2026-04-17 00:26:45.636637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:26:45.636651 | orchestrator | 2026-04-17 00:26:45.636662 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-17 00:26:45.636681 | orchestrator | Friday 17 April 2026 
00:26:40 +0000 (0:00:00.323) 0:00:18.705 ********** 2026-04-17 00:26:45.636692 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:45.636703 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:45.636714 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:26:45.636725 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:45.636736 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:26:45.636747 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:45.636757 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:26:45.636768 | orchestrator | 2026-04-17 00:26:45.636779 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-17 00:26:45.636790 | orchestrator | Friday 17 April 2026 00:26:41 +0000 (0:00:01.267) 0:00:19.973 ********** 2026-04-17 00:26:45.636801 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.636811 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.636822 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.636833 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.636844 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.636854 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.636865 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.636875 | orchestrator | 2026-04-17 00:26:45.636886 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-17 00:26:45.636897 | orchestrator | Friday 17 April 2026 00:26:41 +0000 (0:00:00.213) 0:00:20.187 ********** 2026-04-17 00:26:45.636908 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.636918 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.636929 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.636940 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.636951 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.636962 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.636972 | 
orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.636983 | orchestrator | 2026-04-17 00:26:45.636994 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-17 00:26:45.637005 | orchestrator | Friday 17 April 2026 00:26:41 +0000 (0:00:00.196) 0:00:20.384 ********** 2026-04-17 00:26:45.637016 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.637026 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.637037 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.637048 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.637058 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.637069 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.637079 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.637090 | orchestrator | 2026-04-17 00:26:45.637101 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-17 00:26:45.637112 | orchestrator | Friday 17 April 2026 00:26:42 +0000 (0:00:00.194) 0:00:20.578 ********** 2026-04-17 00:26:45.637124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:26:45.637136 | orchestrator | 2026-04-17 00:26:45.637147 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-17 00:26:45.637157 | orchestrator | Friday 17 April 2026 00:26:42 +0000 (0:00:00.265) 0:00:20.843 ********** 2026-04-17 00:26:45.637168 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.637179 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.637190 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.637201 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.637211 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.637222 | orchestrator | ok: 
[testbed-node-4] 2026-04-17 00:26:45.637232 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.637243 | orchestrator | 2026-04-17 00:26:45.637254 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-17 00:26:45.637265 | orchestrator | Friday 17 April 2026 00:26:42 +0000 (0:00:00.505) 0:00:21.349 ********** 2026-04-17 00:26:45.637283 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:26:45.637294 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:26:45.637305 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:26:45.637316 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:26:45.637327 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:26:45.637337 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:26:45.637348 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:26:45.637359 | orchestrator | 2026-04-17 00:26:45.637370 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-17 00:26:45.637381 | orchestrator | Friday 17 April 2026 00:26:43 +0000 (0:00:00.207) 0:00:21.557 ********** 2026-04-17 00:26:45.637392 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.637403 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:26:45.637413 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:45.637424 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.637435 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:26:45.637446 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.637457 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.637468 | orchestrator | 2026-04-17 00:26:45.637479 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-17 00:26:45.637489 | orchestrator | Friday 17 April 2026 00:26:44 +0000 (0:00:01.097) 0:00:22.654 ********** 2026-04-17 00:26:45.637500 | orchestrator | ok: [testbed-manager] 2026-04-17 
00:26:45.637511 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:26:45.637522 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:26:45.637533 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:26:45.637544 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.637554 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.637565 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:26:45.637576 | orchestrator | 2026-04-17 00:26:45.637586 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-17 00:26:45.637617 | orchestrator | Friday 17 April 2026 00:26:44 +0000 (0:00:00.536) 0:00:23.191 ********** 2026-04-17 00:26:45.637629 | orchestrator | ok: [testbed-manager] 2026-04-17 00:26:45.637640 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:26:45.637650 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:26:45.637661 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:26:45.637679 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:27:25.254396 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.254520 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:27:25.254540 | orchestrator | 2026-04-17 00:27:25.254554 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-17 00:27:25.254567 | orchestrator | Friday 17 April 2026 00:26:45 +0000 (0:00:01.054) 0:00:24.246 ********** 2026-04-17 00:27:25.254659 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.254671 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.254682 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.254693 | orchestrator | changed: [testbed-manager] 2026-04-17 00:27:25.254705 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:27:25.254716 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:27:25.254726 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:27:25.254738 | orchestrator | 2026-04-17 00:27:25.254766 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-17 00:27:25.254779 | orchestrator | Friday 17 April 2026 00:27:02 +0000 (0:00:16.690) 0:00:40.937 ********** 2026-04-17 00:27:25.254789 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.254801 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.254813 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.254824 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.254835 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.254845 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.254856 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.254867 | orchestrator | 2026-04-17 00:27:25.254878 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-17 00:27:25.254915 | orchestrator | Friday 17 April 2026 00:27:02 +0000 (0:00:00.200) 0:00:41.137 ********** 2026-04-17 00:27:25.254929 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.254943 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.254955 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.254968 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.254980 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.254992 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.255005 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.255016 | orchestrator | 2026-04-17 00:27:25.255029 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-17 00:27:25.255041 | orchestrator | Friday 17 April 2026 00:27:02 +0000 (0:00:00.222) 0:00:41.360 ********** 2026-04-17 00:27:25.255054 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.255066 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.255078 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.255091 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.255103 | orchestrator | ok: 
[testbed-node-3] 2026-04-17 00:27:25.255115 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.255127 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.255139 | orchestrator | 2026-04-17 00:27:25.255151 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-17 00:27:25.255163 | orchestrator | Friday 17 April 2026 00:27:03 +0000 (0:00:00.203) 0:00:41.564 ********** 2026-04-17 00:27:25.255178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:27:25.255194 | orchestrator | 2026-04-17 00:27:25.255207 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-17 00:27:25.255219 | orchestrator | Friday 17 April 2026 00:27:03 +0000 (0:00:00.289) 0:00:41.853 ********** 2026-04-17 00:27:25.255230 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.255241 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.255252 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.255263 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.255274 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.255284 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.255295 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.255306 | orchestrator | 2026-04-17 00:27:25.255317 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-17 00:27:25.255328 | orchestrator | Friday 17 April 2026 00:27:05 +0000 (0:00:01.987) 0:00:43.840 ********** 2026-04-17 00:27:25.255339 | orchestrator | changed: [testbed-manager] 2026-04-17 00:27:25.255350 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:27:25.255360 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:27:25.255371 | orchestrator | 
changed: [testbed-node-3] 2026-04-17 00:27:25.255382 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:27:25.255396 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:27:25.255415 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:27:25.255433 | orchestrator | 2026-04-17 00:27:25.255452 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-17 00:27:25.255471 | orchestrator | Friday 17 April 2026 00:27:06 +0000 (0:00:01.109) 0:00:44.949 ********** 2026-04-17 00:27:25.255487 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.255498 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.255509 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.255519 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.255530 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.255541 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.255551 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.255562 | orchestrator | 2026-04-17 00:27:25.255601 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-17 00:27:25.255619 | orchestrator | Friday 17 April 2026 00:27:07 +0000 (0:00:00.846) 0:00:45.796 ********** 2026-04-17 00:27:25.255649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:27:25.255682 | orchestrator | 2026-04-17 00:27:25.255702 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-17 00:27:25.255723 | orchestrator | Friday 17 April 2026 00:27:07 +0000 (0:00:00.310) 0:00:46.106 ********** 2026-04-17 00:27:25.255742 | orchestrator | changed: [testbed-manager] 2026-04-17 00:27:25.255762 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:27:25.255781 | 
orchestrator | changed: [testbed-node-3] 2026-04-17 00:27:25.255801 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:27:25.255821 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:27:25.255841 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:27:25.255861 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:27:25.255881 | orchestrator | 2026-04-17 00:27:25.255929 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-17 00:27:25.255951 | orchestrator | Friday 17 April 2026 00:27:08 +0000 (0:00:01.077) 0:00:47.184 ********** 2026-04-17 00:27:25.255973 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:27:25.255992 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:27:25.256013 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:27:25.256035 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:27:25.256055 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:27:25.256075 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:27:25.256094 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:27:25.256110 | orchestrator | 2026-04-17 00:27:25.256121 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-17 00:27:25.256132 | orchestrator | Friday 17 April 2026 00:27:08 +0000 (0:00:00.242) 0:00:47.426 ********** 2026-04-17 00:27:25.256143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:27:25.256155 | orchestrator | 2026-04-17 00:27:25.256165 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-17 00:27:25.256176 | orchestrator | Friday 17 April 2026 00:27:09 +0000 (0:00:00.298) 0:00:47.724 ********** 2026-04-17 00:27:25.256187 | orchestrator | ok: 
[testbed-manager] 2026-04-17 00:27:25.256198 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.256208 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.256220 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.256239 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.256257 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.256274 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.256294 | orchestrator | 2026-04-17 00:27:25.256312 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-17 00:27:25.256331 | orchestrator | Friday 17 April 2026 00:27:10 +0000 (0:00:01.695) 0:00:49.419 ********** 2026-04-17 00:27:25.256351 | orchestrator | changed: [testbed-manager] 2026-04-17 00:27:25.256371 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:27:25.256389 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:27:25.256409 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:27:25.256429 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:27:25.256448 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:27:25.256465 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:27:25.256476 | orchestrator | 2026-04-17 00:27:25.256486 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-17 00:27:25.256498 | orchestrator | Friday 17 April 2026 00:27:12 +0000 (0:00:01.150) 0:00:50.570 ********** 2026-04-17 00:27:25.256508 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:27:25.256519 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:27:25.256530 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:27:25.256541 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:27:25.256561 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:27:25.256639 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:27:25.256653 | orchestrator | changed: [testbed-manager] 2026-04-17 00:27:25.256664 | 
orchestrator | 2026-04-17 00:27:25.256675 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-17 00:27:25.256686 | orchestrator | Friday 17 April 2026 00:27:22 +0000 (0:00:10.898) 0:01:01.468 ********** 2026-04-17 00:27:25.256697 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.256708 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.256719 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.256729 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.256740 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.256751 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.256761 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.256772 | orchestrator | 2026-04-17 00:27:25.256783 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-17 00:27:25.256794 | orchestrator | Friday 17 April 2026 00:27:23 +0000 (0:00:00.708) 0:01:02.177 ********** 2026-04-17 00:27:25.256805 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.256815 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.256826 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.256836 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.256847 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.256857 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.256868 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.256879 | orchestrator | 2026-04-17 00:27:25.256890 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-17 00:27:25.256901 | orchestrator | Friday 17 April 2026 00:27:24 +0000 (0:00:00.872) 0:01:03.049 ********** 2026-04-17 00:27:25.256911 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.256922 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.256933 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.256943 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 00:27:25.256954 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.256965 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.256975 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.256986 | orchestrator | 2026-04-17 00:27:25.256997 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-17 00:27:25.257008 | orchestrator | Friday 17 April 2026 00:27:24 +0000 (0:00:00.215) 0:01:03.265 ********** 2026-04-17 00:27:25.257019 | orchestrator | ok: [testbed-manager] 2026-04-17 00:27:25.257029 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:27:25.257040 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:27:25.257050 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:27:25.257061 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:27:25.257079 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:27:25.257090 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:27:25.257101 | orchestrator | 2026-04-17 00:27:25.257112 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-17 00:27:25.257123 | orchestrator | Friday 17 April 2026 00:27:24 +0000 (0:00:00.211) 0:01:03.476 ********** 2026-04-17 00:27:25.257135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:27:25.257146 | orchestrator | 2026-04-17 00:27:25.257168 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-17 00:29:41.795107 | orchestrator | Friday 17 April 2026 00:27:25 +0000 (0:00:00.291) 0:01:03.767 ********** 2026-04-17 00:29:41.795235 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:29:41.795252 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.795262 | orchestrator | 
ok: [testbed-node-0] 2026-04-17 00:29:41.795272 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.795281 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.795290 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:29:41.795300 | orchestrator | ok: [testbed-manager] 2026-04-17 00:29:41.795338 | orchestrator | 2026-04-17 00:29:41.795350 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-17 00:29:41.795360 | orchestrator | Friday 17 April 2026 00:27:27 +0000 (0:00:02.394) 0:01:06.162 ********** 2026-04-17 00:29:41.795370 | orchestrator | changed: [testbed-manager] 2026-04-17 00:29:41.795381 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:29:41.795390 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:29:41.795399 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:29:41.795408 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:29:41.795417 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:29:41.795427 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:29:41.795436 | orchestrator | 2026-04-17 00:29:41.795446 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-17 00:29:41.795458 | orchestrator | Friday 17 April 2026 00:27:28 +0000 (0:00:00.527) 0:01:06.689 ********** 2026-04-17 00:29:41.795468 | orchestrator | ok: [testbed-manager] 2026-04-17 00:29:41.795476 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:29:41.795485 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:29:41.795512 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:29:41.795521 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.795530 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.795538 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.795546 | orchestrator | 2026-04-17 00:29:41.795555 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-17 
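"Set needrestart mode" (changed on all hosts above) most likely switches needrestart out of its interactive prompt so that the apt upgrades later in the play cannot hang waiting for input. A sketch of the kind of drop-in it would write, done against a local directory; the file name and the chosen mode are assumptions:

```shell
# needrestart reads Perl-syntax config fragments from /etc/needrestart/conf.d;
# 'a' restarts affected services automatically, 'l' merely lists them.
# Written locally here instead of under /etc to stay side-effect free.
mkdir -p ./needrestart-conf.d
cat > ./needrestart-conf.d/zz-ansible.conf <<'EOF'
$nrconf{restart} = 'a';
EOF
cat ./needrestart-conf.d/zz-ansible.conf
```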
00:29:41.795563 | orchestrator | Friday 17 April 2026 00:27:28 +0000 (0:00:00.249) 0:01:06.938 ********** 2026-04-17 00:29:41.795571 | orchestrator | ok: [testbed-manager] 2026-04-17 00:29:41.795579 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:29:41.795587 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:29:41.795594 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.795602 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.795611 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.795619 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:29:41.795627 | orchestrator | 2026-04-17 00:29:41.795635 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-17 00:29:41.795643 | orchestrator | Friday 17 April 2026 00:27:29 +0000 (0:00:01.292) 0:01:08.231 ********** 2026-04-17 00:29:41.795651 | orchestrator | changed: [testbed-manager] 2026-04-17 00:29:41.795659 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:29:41.795666 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:29:41.795674 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:29:41.795682 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:29:41.795690 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:29:41.795698 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:29:41.795706 | orchestrator | 2026-04-17 00:29:41.795714 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-17 00:29:41.795731 | orchestrator | Friday 17 April 2026 00:27:32 +0000 (0:00:02.308) 0:01:10.539 ********** 2026-04-17 00:29:41.795739 | orchestrator | ok: [testbed-manager] 2026-04-17 00:29:41.795748 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.795755 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:29:41.795763 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.795772 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:29:41.795780 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 00:29:41.795788 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.795796 | orchestrator | 2026-04-17 00:29:41.795804 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-17 00:29:41.795813 | orchestrator | Friday 17 April 2026 00:27:35 +0000 (0:00:03.290) 0:01:13.830 ********** 2026-04-17 00:29:41.795821 | orchestrator | ok: [testbed-manager] 2026-04-17 00:29:41.795829 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.795837 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.795845 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:29:41.795853 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:29:41.795862 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:29:41.795880 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.795888 | orchestrator | 2026-04-17 00:29:41.795896 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-17 00:29:41.795905 | orchestrator | Friday 17 April 2026 00:28:15 +0000 (0:00:39.963) 0:01:53.793 ********** 2026-04-17 00:29:41.795912 | orchestrator | changed: [testbed-manager] 2026-04-17 00:29:41.795920 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:29:41.795929 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:29:41.795936 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:29:41.795944 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:29:41.795952 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:29:41.795961 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:29:41.795968 | orchestrator | 2026-04-17 00:29:41.795977 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-17 00:29:41.795985 | orchestrator | Friday 17 April 2026 00:29:28 +0000 (0:01:13.235) 0:03:07.029 ********** 2026-04-17 00:29:41.795993 | orchestrator | ok: [testbed-manager] 2026-04-17 00:29:41.796001 | orchestrator | 
ok: [testbed-node-1] 2026-04-17 00:29:41.796008 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.796016 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:29:41.796023 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.796031 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.796038 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:29:41.796046 | orchestrator | 2026-04-17 00:29:41.796054 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-17 00:29:41.796061 | orchestrator | Friday 17 April 2026 00:29:30 +0000 (0:00:02.032) 0:03:09.061 ********** 2026-04-17 00:29:41.796069 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:29:41.796076 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:29:41.796084 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:29:41.796092 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:29:41.796100 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:29:41.796108 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:29:41.796116 | orchestrator | changed: [testbed-manager] 2026-04-17 00:29:41.796123 | orchestrator | 2026-04-17 00:29:41.796131 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-17 00:29:41.796138 | orchestrator | Friday 17 April 2026 00:29:40 +0000 (0:00:10.251) 0:03:19.313 ********** 2026-04-17 00:29:41.796181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-17 00:29:41.796199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-17 00:29:41.796219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-17 00:29:41.796229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-17 00:29:41.796246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-17 00:29:41.796254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-17 00:29:41.796262 | orchestrator | 2026-04-17 00:29:41.796274 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-17 00:29:41.796282 | orchestrator | Friday 17 April 2026 00:29:41 +0000 (0:00:00.307) 0:03:19.620 ********** 2026-04-17 00:29:41.796291 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 00:29:41.796299 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:29:41.796307 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 00:29:41.796315 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 00:29:41.796324 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:29:41.796332 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:29:41.796340 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-17 00:29:41.796348 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:29:41.796358 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 00:29:41.796366 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 00:29:41.796374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-17 00:29:41.796382 | orchestrator | 2026-04-17 00:29:41.796390 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-17 00:29:41.796398 | orchestrator | Friday 17 April 2026 00:29:41 +0000 (0:00:00.634) 0:03:20.255 ********** 2026-04-17 00:29:41.796411 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 00:29:41.796421 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 00:29:41.796429 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 00:29:41.796437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 00:29:41.796446 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 00:29:41.796462 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 00:29:53.031613 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 00:29:53.031694 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 00:29:53.031701 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 00:29:53.031706 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 00:29:53.031712 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:29:53.031717 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 00:29:53.031722 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 00:29:53.031743 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 00:29:53.031747 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 00:29:53.031751 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 00:29:53.031756 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 
00:29:53.031760 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 00:29:53.031764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 00:29:53.031768 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 00:29:53.031772 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 00:29:53.031776 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 00:29:53.031780 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 00:29:53.031784 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 00:29:53.031788 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 00:29:53.031792 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 00:29:53.031796 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 00:29:53.031800 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 00:29:53.031804 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:29:53.031808 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 00:29:53.031812 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 00:29:53.031816 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 00:29:53.031819 | orchestrator | skipping: [testbed-node-4] 2026-04-17 
00:29:53.031823 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-17 00:29:53.031827 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-17 00:29:53.031831 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-17 00:29:53.031835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-17 00:29:53.031838 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-17 00:29:53.031842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-17 00:29:53.031846 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-17 00:29:53.031850 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-17 00:29:53.031854 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-17 00:29:53.031857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-17 00:29:53.031885 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:29:53.031889 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-17 00:29:53.031893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-17 00:29:53.031900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-17 00:29:53.031904 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-17 00:29:53.031909 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-17 00:29:53.031924 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-17 00:29:53.031929 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-17 00:29:53.031932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-17 00:29:53.031936 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-17 00:29:53.031940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-17 00:29:53.031944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-17 00:29:53.031948 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-17 00:29:53.031952 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-17 00:29:53.031955 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-17 00:29:53.031959 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-17 00:29:53.031963 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-17 00:29:53.031967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-17 00:29:53.031971 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-17 00:29:53.031975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-17 00:29:53.031978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 
2026-04-17 00:29:53.031982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-17 00:29:53.031986 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-17 00:29:53.031990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-17 00:29:53.031993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-17 00:29:53.031997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-17 00:29:53.032001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-17 00:29:53.032005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-17 00:29:53.032009 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-17 00:29:53.032013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-17 00:29:53.032016 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-17 00:29:53.032020 | orchestrator | 2026-04-17 00:29:53.032025 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-17 00:29:53.032029 | orchestrator | Friday 17 April 2026 00:29:50 +0000 (0:00:08.269) 0:03:28.524 ********** 2026-04-17 00:29:53.032032 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032036 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032040 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032047 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032051 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032055 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032059 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-17 00:29:53.032063 | orchestrator | 2026-04-17 00:29:53.032067 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-17 00:29:53.032070 | orchestrator | Friday 17 April 2026 00:29:51 +0000 (0:00:01.500) 0:03:30.025 ********** 2026-04-17 00:29:53.032074 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:29:53.032078 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:29:53.032086 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:29:53.032090 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:29:53.032094 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:29:53.032098 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:29:53.032101 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:29:53.032105 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:29:53.032109 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-17 00:29:53.032113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-17 00:29:53.032120 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-17 
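The per-group sysctl tasks above are equivalent to installing one sysctl.d drop-in per group and loading it. Using the rabbitmq values exactly as they appear in the log (only the drop-in file name is an assumption):

```shell
# rabbitmq group tuning, values copied from the play output above.
cat > ./99-rabbitmq.conf <<'EOF'
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
EOF
# On a real host this would go to /etc/sysctl.d/ followed by `sysctl --system`
# (or `sysctl -p ./99-rabbitmq.conf`); left as a comment to stay read-only.
grep -c '=' ./99-rabbitmq.conf
```

The skipping/changed pattern in the log mirrors group membership: only testbed-node-0/1/2 carry the rabbitmq group, so the manager and nodes 3-5 skip every item.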
00:30:05.041972 | orchestrator | 2026-04-17 00:30:05.042129 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-17 00:30:05.042143 | orchestrator | Friday 17 April 2026 00:29:53 +0000 (0:00:01.557) 0:03:31.583 ********** 2026-04-17 00:30:05.042153 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:30:05.042159 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:30:05.042165 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:30:05.042169 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:30:05.042174 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:30:05.042178 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:30:05.042182 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-17 00:30:05.042186 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:30:05.042190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-17 00:30:05.042195 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-17 00:30:05.042199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-17 00:30:05.042203 | orchestrator | 2026-04-17 00:30:05.042207 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-17 00:30:05.042211 | orchestrator | Friday 17 April 2026 00:29:53 +0000 (0:00:00.598) 0:03:32.181 ********** 2026-04-17 00:30:05.042215 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-17 
00:30:05.042219 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:30:05.042223 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-17 00:30:05.042227 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-17 00:30:05.042249 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:30:05.042253 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:30:05.042257 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-17 00:30:05.042261 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:30:05.042265 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-17 00:30:05.042268 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-17 00:30:05.042272 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-17 00:30:05.042276 | orchestrator | 2026-04-17 00:30:05.042280 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-17 00:30:05.042284 | orchestrator | Friday 17 April 2026 00:29:54 +0000 (0:00:00.683) 0:03:32.865 ********** 2026-04-17 00:30:05.042288 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:30:05.042291 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:30:05.042295 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:30:05.042300 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:30:05.042304 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:30:05.042307 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:30:05.042311 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:30:05.042315 | orchestrator | 2026-04-17 00:30:05.042319 | orchestrator | TASK 
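The compute/network and k3s_node groups follow the same pattern; the values below are the ones shown in the log, while the file name and the comments explaining intent are assumptions:

```shell
cat > ./99-node-tuning.conf <<'EOF'
# compute/network groups: enlarge the conntrack table
net.netfilter.nf_conntrack_max = 1048576
# k3s_node group: raise the per-user inotify instance limit
fs.inotify.max_user_instances = 1024
EOF
# sysctl -e ignores keys whose kernel modules aren't loaded (nf_conntrack may
# not be); errors are also suppressed so this works unprivileged.
sysctl -e -p ./99-node-tuning.conf 2>/dev/null || true
grep -vc '^#' ./99-node-tuning.conf
```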
[osism.commons.services : Populate service facts] ************************* 2026-04-17 00:30:05.042323 | orchestrator | Friday 17 April 2026 00:29:54 +0000 (0:00:00.216) 0:03:33.081 ********** 2026-04-17 00:30:05.042327 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:30:05.042331 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:30:05.042335 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:30:05.042339 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:30:05.042343 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:30:05.042347 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:30:05.042350 | orchestrator | ok: [testbed-manager] 2026-04-17 00:30:05.042354 | orchestrator | 2026-04-17 00:30:05.042358 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-17 00:30:05.042362 | orchestrator | Friday 17 April 2026 00:29:59 +0000 (0:00:04.821) 0:03:37.903 ********** 2026-04-17 00:30:05.042366 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-17 00:30:05.042370 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-17 00:30:05.042374 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:30:05.042378 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-17 00:30:05.042382 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:30:05.042385 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:30:05.042389 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-17 00:30:05.042393 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-17 00:30:05.042397 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:30:05.042413 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-17 00:30:05.042417 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:30:05.042421 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:30:05.042424 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-17 00:30:05.042428 
| orchestrator | skipping: [testbed-node-5] 2026-04-17 00:30:05.042432 | orchestrator | 2026-04-17 00:30:05.042436 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-17 00:30:05.042440 | orchestrator | Friday 17 April 2026 00:29:59 +0000 (0:00:00.277) 0:03:38.180 ********** 2026-04-17 00:30:05.042444 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-17 00:30:05.042448 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-17 00:30:05.042452 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-17 00:30:05.042467 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-17 00:30:05.042475 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-17 00:30:05.042479 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-17 00:30:05.042521 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-17 00:30:05.042525 | orchestrator | 2026-04-17 00:30:05.042529 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-17 00:30:05.042533 | orchestrator | Friday 17 April 2026 00:30:00 +0000 (0:00:01.055) 0:03:39.236 ********** 2026-04-17 00:30:05.042539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:30:05.042544 | orchestrator | 2026-04-17 00:30:05.042548 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-17 00:30:05.042552 | orchestrator | Friday 17 April 2026 00:30:01 +0000 (0:00:00.384) 0:03:39.620 ********** 2026-04-17 00:30:05.042556 | orchestrator | ok: [testbed-manager] 2026-04-17 00:30:05.042560 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:30:05.042564 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:30:05.042567 | orchestrator | ok: 
[testbed-node-5] 2026-04-17 00:30:05.042571 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:30:05.042575 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:30:05.042579 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:30:05.042582 | orchestrator | 2026-04-17 00:30:05.042586 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-17 00:30:05.042590 | orchestrator | Friday 17 April 2026 00:30:02 +0000 (0:00:01.470) 0:03:41.091 ********** 2026-04-17 00:30:05.042594 | orchestrator | ok: [testbed-manager] 2026-04-17 00:30:05.042597 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:30:05.042601 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:30:05.042605 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:30:05.042609 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:30:05.042612 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:30:05.042616 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:30:05.042620 | orchestrator | 2026-04-17 00:30:05.042624 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-17 00:30:05.042627 | orchestrator | Friday 17 April 2026 00:30:03 +0000 (0:00:00.580) 0:03:41.671 ********** 2026-04-17 00:30:05.042631 | orchestrator | changed: [testbed-manager] 2026-04-17 00:30:05.042635 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:30:05.042639 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:30:05.042642 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:30:05.042646 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:30:05.042650 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:30:05.042654 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:30:05.042658 | orchestrator | 2026-04-17 00:30:05.042661 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-17 00:30:05.042665 | orchestrator | Friday 17 April 2026 00:30:03 +0000 (0:00:00.641) 
0:03:42.313 ********** 2026-04-17 00:30:05.042669 | orchestrator | ok: [testbed-manager] 2026-04-17 00:30:05.042673 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:30:05.042677 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:30:05.042680 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:30:05.042684 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:30:05.042688 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:30:05.042691 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:30:05.042695 | orchestrator | 2026-04-17 00:30:05.042699 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-17 00:30:05.042703 | orchestrator | Friday 17 April 2026 00:30:04 +0000 (0:00:00.685) 0:03:42.998 ********** 2026-04-17 00:30:05.042709 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384237.559, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:05.042722 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384345.6633632, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:05.042726 | orchestrator | changed: 
[testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384310.297984, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:05.042743 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384356.5986922, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253294 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384314.2556627, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253392 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384394.6698253, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253409 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776384339.6750872, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253421 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253459 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253519 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253533 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253571 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253584 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253596 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 00:30:10.253608 | orchestrator | 2026-04-17 00:30:10.253621 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-17 00:30:10.253634 | orchestrator | Friday 17 April 2026 00:30:05 +0000 (0:00:00.977) 0:03:43.976 ********** 2026-04-17 00:30:10.253645 | orchestrator | changed: [testbed-manager] 2026-04-17 00:30:10.253666 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:30:10.253677 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:30:10.253688 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:30:10.253699 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:30:10.253710 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:30:10.253720 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:30:10.253731 | orchestrator | 2026-04-17 00:30:10.253743 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-17 00:30:10.253753 | orchestrator | Friday 17 April 2026 00:30:06 +0000 (0:00:01.108) 0:03:45.085 ********** 2026-04-17 00:30:10.253764 | orchestrator | changed: [testbed-manager] 2026-04-17 00:30:10.253775 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:30:10.253786 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:30:10.253797 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:30:10.253807 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:30:10.253818 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:30:10.253829 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:30:10.253840 | orchestrator | 2026-04-17 00:30:10.253854 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-17 00:30:10.253866 | orchestrator | Friday 17 April 2026 00:30:07 +0000 (0:00:01.156) 0:03:46.241 ********** 2026-04-17 00:30:10.253878 | orchestrator | changed: [testbed-manager] 2026-04-17 00:30:10.253890 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:30:10.253902 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:30:10.253914 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:30:10.253926 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:30:10.253938 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:30:10.253951 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:30:10.253962 | orchestrator | 2026-04-17 00:30:10.253975 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-17 00:30:10.253988 | orchestrator | Friday 17 April 2026 00:30:09 +0000 (0:00:01.318) 0:03:47.560 ********** 2026-04-17 00:30:10.254005 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:30:10.254068 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:30:10.254082 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:30:10.254094 | orchestrator | skipping: [testbed-node-2] 
2026-04-17 00:30:10.254107 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:30:10.254119 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:30:10.254132 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:30:10.254144 | orchestrator | 2026-04-17 00:30:10.254156 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-17 00:30:10.254169 | orchestrator | Friday 17 April 2026 00:30:09 +0000 (0:00:00.226) 0:03:47.786 ********** 2026-04-17 00:30:10.254182 | orchestrator | ok: [testbed-manager] 2026-04-17 00:30:10.254196 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:30:10.254207 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:30:10.254218 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:30:10.254229 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:30:10.254240 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:30:10.254251 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:30:10.254261 | orchestrator | 2026-04-17 00:30:10.254273 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-17 00:30:10.254283 | orchestrator | Friday 17 April 2026 00:30:09 +0000 (0:00:00.633) 0:03:48.420 ********** 2026-04-17 00:30:10.254296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:30:10.254308 | orchestrator | 2026-04-17 00:30:10.254320 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-17 00:30:10.254338 | orchestrator | Friday 17 April 2026 00:30:10 +0000 (0:00:00.349) 0:03:48.770 ********** 2026-04-17 00:31:28.557903 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558079 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:31:28.558130 | orchestrator | changed: 
[testbed-node-1] 2026-04-17 00:31:28.558143 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:31:28.558154 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:31:28.558165 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:31:28.558176 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:31:28.558187 | orchestrator | 2026-04-17 00:31:28.558199 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-17 00:31:28.558212 | orchestrator | Friday 17 April 2026 00:30:19 +0000 (0:00:08.761) 0:03:57.532 ********** 2026-04-17 00:31:28.558224 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558235 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:31:28.558246 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:31:28.558257 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:31:28.558267 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:31:28.558278 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:31:28.558289 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:31:28.558300 | orchestrator | 2026-04-17 00:31:28.558311 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-17 00:31:28.558322 | orchestrator | Friday 17 April 2026 00:30:20 +0000 (0:00:01.547) 0:03:59.079 ********** 2026-04-17 00:31:28.558333 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558344 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:31:28.558355 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:31:28.558366 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:31:28.558376 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:31:28.558387 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:31:28.558398 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:31:28.558408 | orchestrator | 2026-04-17 00:31:28.558422 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-17 00:31:28.558460 | orchestrator | 
Friday 17 April 2026 00:30:21 +0000 (0:00:01.152) 0:04:00.231 ********** 2026-04-17 00:31:28.558473 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558487 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:31:28.558499 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:31:28.558511 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:31:28.558523 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:31:28.558535 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:31:28.558548 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:31:28.558560 | orchestrator | 2026-04-17 00:31:28.558574 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-17 00:31:28.558586 | orchestrator | Friday 17 April 2026 00:30:21 +0000 (0:00:00.263) 0:04:00.494 ********** 2026-04-17 00:31:28.558597 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558608 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:31:28.558618 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:31:28.558629 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:31:28.558640 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:31:28.558650 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:31:28.558661 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:31:28.558672 | orchestrator | 2026-04-17 00:31:28.558683 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-17 00:31:28.558694 | orchestrator | Friday 17 April 2026 00:30:22 +0000 (0:00:00.283) 0:04:00.777 ********** 2026-04-17 00:31:28.558705 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558716 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:31:28.558726 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:31:28.558737 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:31:28.558748 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:31:28.558759 | orchestrator | ok: [testbed-node-4] 2026-04-17 
00:31:28.558769 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:31:28.558780 | orchestrator | 2026-04-17 00:31:28.558791 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-17 00:31:28.558802 | orchestrator | Friday 17 April 2026 00:30:22 +0000 (0:00:00.279) 0:04:01.057 ********** 2026-04-17 00:31:28.558813 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:31:28.558833 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:31:28.558844 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:31:28.558855 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:31:28.558865 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:31:28.558876 | orchestrator | ok: [testbed-manager] 2026-04-17 00:31:28.558887 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:31:28.558898 | orchestrator | 2026-04-17 00:31:28.558909 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-17 00:31:28.558920 | orchestrator | Friday 17 April 2026 00:30:27 +0000 (0:00:04.672) 0:04:05.730 ********** 2026-04-17 00:31:28.558933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:31:28.558947 | orchestrator | 2026-04-17 00:31:28.558958 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-17 00:31:28.558986 | orchestrator | Friday 17 April 2026 00:30:27 +0000 (0:00:00.352) 0:04:06.083 ********** 2026-04-17 00:31:28.558998 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-17 00:31:28.559009 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-17 00:31:28.559020 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:31:28.559031 | orchestrator | skipping: [testbed-node-0] => 
(item=apt-daily-upgrade)  2026-04-17 00:31:28.559043 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-04-17 00:31:28.559054 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-04-17 00:31:28.559065 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-04-17 00:31:28.559076 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:31:28.559087 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-04-17 00:31:28.559097 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:31:28.559109 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-04-17 00:31:28.559120 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-17 00:31:28.559131 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-17 00:31:28.559142 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:31:28.559153 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-17 00:31:28.559164 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-17 00:31:28.559195 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:31:28.559206 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:31:28.559217 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-17 00:31:28.559228 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-04-17 00:31:28.559240 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:31:28.559250 | orchestrator | 2026-04-17 00:31:28.559262 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-04-17 00:31:28.559273 | orchestrator | Friday 17 April 2026 00:30:27 +0000 (0:00:00.305) 0:04:06.388 ********** 2026-04-17 00:31:28.559284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:31:28.559295 | orchestrator | 2026-04-17 00:31:28.559306 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-04-17 00:31:28.559317 | orchestrator | Friday 17 April 2026 00:30:28 +0000 (0:00:00.457) 0:04:06.846 ********** 2026-04-17 00:31:28.559328 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-04-17 00:31:28.559339 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-04-17 00:31:28.559349 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:31:28.559360 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:31:28.559371 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-04-17 00:31:28.559390 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-04-17 00:31:28.559400 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:31:28.559411 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-04-17 00:31:28.559450 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:31:28.559470 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-04-17 00:31:28.559486 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:31:28.559504 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:31:28.559522 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-04-17 00:31:28.559539 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:31:28.559555 | orchestrator | 2026-04-17 00:31:28.559572 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-04-17 00:31:28.559589 | orchestrator | Friday 17 April 2026 00:30:28 +0000 (0:00:00.310) 0:04:07.156 ********** 2026-04-17 00:31:28.559606 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:31:28.559624 | orchestrator | 2026-04-17 00:31:28.559642 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-04-17 00:31:28.559659 | orchestrator | Friday 17 April 2026 00:30:29 +0000 (0:00:00.394) 0:04:07.551 ********** 2026-04-17 00:31:28.559678 | orchestrator | changed: [testbed-manager] 2026-04-17 00:31:28.559697 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:31:28.559714 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:31:28.559732 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:31:28.559748 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:31:28.559759 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:31:28.559770 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:31:28.559781 | orchestrator | 2026-04-17 00:31:28.559792 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-04-17 00:31:28.559803 | orchestrator | Friday 17 April 2026 00:31:02 +0000 (0:00:33.347) 0:04:40.898 ********** 2026-04-17 00:31:28.559814 | orchestrator | changed: [testbed-manager] 2026-04-17 00:31:28.559825 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:31:28.559835 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:31:28.559846 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:31:28.559857 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:31:28.559867 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:31:28.559878 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:31:28.559889 | orchestrator | 2026-04-17 00:31:28.559908 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-04-17 00:31:28.559919 | orchestrator | 
Friday 17 April 2026 00:31:11 +0000 (0:00:08.988) 0:04:49.886 **********
2026-04-17 00:31:28.559930 | orchestrator | changed: [testbed-manager]
2026-04-17 00:31:28.559941 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:31:28.559952 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:31:28.559962 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:31:28.559973 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:31:28.559984 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:31:28.559995 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:31:28.560005 | orchestrator |
2026-04-17 00:31:28.560016 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-17 00:31:28.560027 | orchestrator | Friday 17 April 2026 00:31:19 +0000 (0:00:08.569) 0:04:58.456 **********
2026-04-17 00:31:28.560038 | orchestrator | ok: [testbed-manager]
2026-04-17 00:31:28.560049 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:31:28.560060 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:31:28.560071 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:31:28.560081 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:31:28.560092 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:31:28.560103 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:31:28.560123 | orchestrator |
2026-04-17 00:31:28.560134 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-17 00:31:28.560145 | orchestrator | Friday 17 April 2026 00:31:21 +0000 (0:00:02.040) 0:05:00.496 **********
2026-04-17 00:31:28.560156 | orchestrator | changed: [testbed-manager]
2026-04-17 00:31:28.560167 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:31:28.560178 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:31:28.560188 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:31:28.560199 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:31:28.560210 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:31:28.560221 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:31:28.560232 | orchestrator |
2026-04-17 00:31:28.560253 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-17 00:31:39.554471 | orchestrator | Friday 17 April 2026 00:31:28 +0000 (0:00:06.574) 0:05:07.071 **********
2026-04-17 00:31:39.554565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:31:39.554579 | orchestrator |
2026-04-17 00:31:39.554589 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-17 00:31:39.554597 | orchestrator | Friday 17 April 2026 00:31:28 +0000 (0:00:00.394) 0:05:07.466 **********
2026-04-17 00:31:39.554606 | orchestrator | changed: [testbed-manager]
2026-04-17 00:31:39.554622 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:31:39.554637 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:31:39.554650 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:31:39.554664 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:31:39.554679 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:31:39.554695 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:31:39.554704 | orchestrator |
2026-04-17 00:31:39.554712 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-17 00:31:39.554720 | orchestrator | Friday 17 April 2026 00:31:29 +0000 (0:00:00.813) 0:05:08.279 **********
2026-04-17 00:31:39.554729 | orchestrator | ok: [testbed-manager]
2026-04-17 00:31:39.554737 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:31:39.554745 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:31:39.554754 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:31:39.554762 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:31:39.554770 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:31:39.554778 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:31:39.554786 | orchestrator |
2026-04-17 00:31:39.554794 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-17 00:31:39.554802 | orchestrator | Friday 17 April 2026 00:31:31 +0000 (0:00:01.801) 0:05:10.080 **********
2026-04-17 00:31:39.554810 | orchestrator | changed: [testbed-manager]
2026-04-17 00:31:39.554818 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:31:39.554826 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:31:39.554834 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:31:39.554842 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:31:39.554850 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:31:39.554858 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:31:39.554866 | orchestrator |
2026-04-17 00:31:39.554874 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-17 00:31:39.554882 | orchestrator | Friday 17 April 2026 00:31:32 +0000 (0:00:00.767) 0:05:10.847 **********
2026-04-17 00:31:39.554890 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:31:39.554898 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:31:39.554906 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:31:39.554913 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:31:39.554922 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:31:39.554929 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:31:39.554937 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:31:39.554945 | orchestrator |
2026-04-17 00:31:39.554973 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-17 00:31:39.554983 | orchestrator | Friday 17 April 2026 00:31:32 +0000 (0:00:00.262) 0:05:11.110 **********
2026-04-17 00:31:39.554992 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:31:39.555001 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:31:39.555011 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:31:39.555019 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:31:39.555028 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:31:39.555037 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:31:39.555046 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:31:39.555055 | orchestrator |
2026-04-17 00:31:39.555064 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-17 00:31:39.555073 | orchestrator | Friday 17 April 2026 00:31:32 +0000 (0:00:00.382) 0:05:11.492 **********
2026-04-17 00:31:39.555082 | orchestrator | ok: [testbed-manager]
2026-04-17 00:31:39.555091 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:31:39.555100 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:31:39.555109 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:31:39.555118 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:31:39.555127 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:31:39.555136 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:31:39.555144 | orchestrator |
2026-04-17 00:31:39.555171 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-17 00:31:39.555186 | orchestrator | Friday 17 April 2026 00:31:33 +0000 (0:00:00.411) 0:05:11.903 **********
2026-04-17 00:31:39.555200 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:31:39.555213 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:31:39.555227 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:31:39.555242 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:31:39.555256 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:31:39.555267 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:31:39.555276 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:31:39.555285 | orchestrator |
2026-04-17 00:31:39.555294 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-17 00:31:39.555304 | orchestrator | Friday 17 April 2026 00:31:33 +0000 (0:00:00.258) 0:05:12.162 **********
2026-04-17 00:31:39.555314 | orchestrator | ok: [testbed-manager]
2026-04-17 00:31:39.555322 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:31:39.555332 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:31:39.555340 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:31:39.555348 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:31:39.555355 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:31:39.555363 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:31:39.555371 | orchestrator |
2026-04-17 00:31:39.555379 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-17 00:31:39.555387 | orchestrator | Friday 17 April 2026 00:31:33 +0000 (0:00:00.289) 0:05:12.452 **********
2026-04-17 00:31:39.555395 | orchestrator | ok: [testbed-manager] =>
2026-04-17 00:31:39.555403 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555411 | orchestrator | ok: [testbed-node-0] =>
2026-04-17 00:31:39.555446 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555460 | orchestrator | ok: [testbed-node-1] =>
2026-04-17 00:31:39.555473 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555486 | orchestrator | ok: [testbed-node-2] =>
2026-04-17 00:31:39.555498 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555522 | orchestrator | ok: [testbed-node-3] =>
2026-04-17 00:31:39.555531 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555539 | orchestrator | ok: [testbed-node-4] =>
2026-04-17 00:31:39.555546 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555554 | orchestrator | ok: [testbed-node-5] =>
2026-04-17 00:31:39.555562 | orchestrator |   docker_version: 5:27.5.1
2026-04-17 00:31:39.555570 | orchestrator |
2026-04-17 00:31:39.555578 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-17 00:31:39.555586 | orchestrator | Friday 17 April 2026 00:31:34 +0000 (0:00:00.240) 0:05:12.692 **********
2026-04-17 00:31:39.555602 | orchestrator | ok: [testbed-manager] =>
2026-04-17 00:31:39.555610 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555618 | orchestrator | ok: [testbed-node-0] =>
2026-04-17 00:31:39.555626 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555634 | orchestrator | ok: [testbed-node-1] =>
2026-04-17 00:31:39.555642 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555650 | orchestrator | ok: [testbed-node-2] =>
2026-04-17 00:31:39.555658 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555666 | orchestrator | ok: [testbed-node-3] =>
2026-04-17 00:31:39.555673 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555681 | orchestrator | ok: [testbed-node-4] =>
2026-04-17 00:31:39.555689 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555697 | orchestrator | ok: [testbed-node-5] =>
2026-04-17 00:31:39.555705 | orchestrator |   docker_cli_version: 5:27.5.1
2026-04-17 00:31:39.555713 | orchestrator |
2026-04-17 00:31:39.555721 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-17 00:31:39.555729 | orchestrator | Friday 17 April 2026 00:31:34 +0000 (0:00:00.262) 0:05:12.954 **********
2026-04-17 00:31:39.555737 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:31:39.555744 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:31:39.555752 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:31:39.555760 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:31:39.555768 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:31:39.555776 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:31:39.555784 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:31:39.555792 | orchestrator |
2026-04-17 00:31:39.555800 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-17 00:31:39.555808 | orchestrator | Friday 17 April 2026 00:31:34 +0000 (0:00:00.280) 0:05:13.235 **********
2026-04-17 00:31:39.555816 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:31:39.555824 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:31:39.555834 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:31:39.555847 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:31:39.555861 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:31:39.555874 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:31:39.555887 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:31:39.555895 | orchestrator |
2026-04-17 00:31:39.555903 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-17 00:31:39.555911 | orchestrator | Friday 17 April 2026 00:31:34 +0000 (0:00:00.251) 0:05:13.487 **********
2026-04-17 00:31:39.555921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:31:39.555932 | orchestrator |
2026-04-17 00:31:39.555945 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-17 00:31:39.555960 | orchestrator | Friday 17 April 2026 00:31:35 +0000 (0:00:00.405) 0:05:13.892 **********
2026-04-17 00:31:39.555973 | orchestrator | ok: [testbed-manager]
2026-04-17 00:31:39.555987 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:31:39.556001 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:31:39.556014 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:31:39.556024 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:31:39.556032 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:31:39.556040 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:31:39.556048 | orchestrator |
2026-04-17 00:31:39.556056 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-17 00:31:39.556064 | orchestrator | Friday 17 April 2026 00:31:36 +0000 (0:00:00.808) 0:05:14.700 **********
2026-04-17 00:31:39.556072 | orchestrator | ok: [testbed-manager]
2026-04-17 00:31:39.556080 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:31:39.556088 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:31:39.556109 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:31:39.556117 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:31:39.556125 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:31:39.556133 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:31:39.556140 | orchestrator |
2026-04-17 00:31:39.556148 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-17 00:31:39.556157 | orchestrator | Friday 17 April 2026 00:31:39 +0000 (0:00:00.551) 0:05:17.742 **********
2026-04-17 00:31:39.556165 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-17 00:31:39.556173 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-17 00:31:39.556181 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-17 00:31:39.556189 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-17 00:31:39.556197 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-17 00:31:39.556206 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:31:39.556214 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-17 00:31:39.556222 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-17 00:31:39.556230 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-17 00:31:39.556237 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-17 00:31:39.556245 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:31:39.556253 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-17 00:31:39.556261 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-17 00:31:39.556269 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-17 00:31:39.556277 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:31:39.556285 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-17 00:31:39.556300 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-17 00:32:44.424881 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-17 00:32:44.425067 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:32:44.425093 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-17 00:32:44.425111 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-17 00:32:44.425129 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-17 00:32:44.425147 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:32:44.425165 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:32:44.425184 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-17 00:32:44.425203 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-17 00:32:44.425221 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-17 00:32:44.425241 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:32:44.425261 | orchestrator |
2026-04-17 00:32:44.425284 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-17 00:32:44.425298 | orchestrator | Friday 17 April 2026 00:31:39 +0000 (0:00:00.551) 0:05:18.294 **********
2026-04-17 00:32:44.425310 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.425322 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.425368 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.425388 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.425407 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.425426 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.425445 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.425464 | orchestrator |
2026-04-17 00:32:44.425482 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-17 00:32:44.425500 | orchestrator | Friday 17 April 2026 00:31:47 +0000 (0:00:07.579) 0:05:25.873 **********
2026-04-17 00:32:44.425519 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.425537 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.425555 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.425573 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.425634 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.425651 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.425669 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.425686 | orchestrator |
2026-04-17 00:32:44.425703 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-17 00:32:44.425721 | orchestrator | Friday 17 April 2026 00:31:48 +0000 (0:00:01.040) 0:05:26.914 **********
2026-04-17 00:32:44.425739 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.425756 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.425774 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.425792 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.425810 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.425826 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.425842 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.425858 | orchestrator |
2026-04-17 00:32:44.425875 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-17 00:32:44.425892 | orchestrator | Friday 17 April 2026 00:31:57 +0000 (0:00:09.175) 0:05:36.089 **********
2026-04-17 00:32:44.425908 | orchestrator | changed: [testbed-manager]
2026-04-17 00:32:44.425925 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.425943 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.425961 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.425980 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.425998 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.426150 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.426174 | orchestrator |
2026-04-17 00:32:44.426191 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-17 00:32:44.426208 | orchestrator | Friday 17 April 2026 00:32:01 +0000 (0:00:03.494) 0:05:39.584 **********
2026-04-17 00:32:44.426226 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.426244 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.426272 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.426291 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.426310 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.426354 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.426373 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.426389 | orchestrator |
2026-04-17 00:32:44.426407 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-17 00:32:44.426426 | orchestrator | Friday 17 April 2026 00:32:02 +0000 (0:00:01.287) 0:05:40.871 **********
2026-04-17 00:32:44.426444 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.426462 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.426481 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.426499 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.426517 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.426535 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.426547 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.426558 | orchestrator |
2026-04-17 00:32:44.426569 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-17 00:32:44.426581 | orchestrator | Friday 17 April 2026 00:32:03 +0000 (0:00:01.302) 0:05:42.174 **********
2026-04-17 00:32:44.426592 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:32:44.426603 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:32:44.426613 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:32:44.426625 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:32:44.426636 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:32:44.426647 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:32:44.426657 | orchestrator | changed: [testbed-manager]
2026-04-17 00:32:44.426668 | orchestrator |
2026-04-17 00:32:44.426679 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-17 00:32:44.426689 | orchestrator | Friday 17 April 2026 00:32:04 +0000 (0:00:00.564) 0:05:42.738 **********
2026-04-17 00:32:44.426701 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.426729 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.426740 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.426751 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.426762 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.426773 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.426783 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.426794 | orchestrator |
2026-04-17 00:32:44.426805 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-17 00:32:44.426845 | orchestrator | Friday 17 April 2026 00:32:14 +0000 (0:00:10.167) 0:05:52.906 **********
2026-04-17 00:32:44.426857 | orchestrator | changed: [testbed-manager]
2026-04-17 00:32:44.426868 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.426879 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.426890 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.426900 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.426911 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.426922 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.426932 | orchestrator |
2026-04-17 00:32:44.426943 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-17 00:32:44.426954 | orchestrator | Friday 17 April 2026 00:32:15 +0000 (0:00:01.115) 0:05:54.021 **********
2026-04-17 00:32:44.426965 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.426976 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.426987 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.426998 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.427008 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.427019 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.427030 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.427041 | orchestrator |
2026-04-17 00:32:44.427051 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-17 00:32:44.427062 | orchestrator | Friday 17 April 2026 00:32:25 +0000 (0:00:10.231) 0:06:04.253 **********
2026-04-17 00:32:44.427073 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.427084 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.427095 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.427106 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.427117 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.427127 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.427138 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.427149 | orchestrator |
2026-04-17 00:32:44.427221 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-17 00:32:44.427234 | orchestrator | Friday 17 April 2026 00:32:37 +0000 (0:00:11.605) 0:06:15.859 **********
2026-04-17 00:32:44.427245 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-17 00:32:44.427257 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-17 00:32:44.427268 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-17 00:32:44.427279 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-17 00:32:44.427291 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-17 00:32:44.427302 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-17 00:32:44.427313 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-17 00:32:44.427324 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-17 00:32:44.427378 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-17 00:32:44.427390 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-17 00:32:44.427401 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-17 00:32:44.427412 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-17 00:32:44.427423 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-17 00:32:44.427434 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-17 00:32:44.427445 | orchestrator |
2026-04-17 00:32:44.427457 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-17 00:32:44.427488 | orchestrator | Friday 17 April 2026 00:32:38 +0000 (0:00:01.417) 0:06:17.276 **********
2026-04-17 00:32:44.427508 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:32:44.427528 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:32:44.427548 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:32:44.427567 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:32:44.427587 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:32:44.427605 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:32:44.427626 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:32:44.427644 | orchestrator |
2026-04-17 00:32:44.427663 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-17 00:32:44.427683 | orchestrator | Friday 17 April 2026 00:32:39 +0000 (0:00:00.658) 0:06:17.935 **********
2026-04-17 00:32:44.427705 | orchestrator | ok: [testbed-manager]
2026-04-17 00:32:44.427725 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:32:44.427745 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:32:44.427766 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:32:44.427797 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:32:44.427818 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:32:44.427832 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:32:44.427842 | orchestrator |
2026-04-17 00:32:44.427853 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-17 00:32:44.427866 | orchestrator | Friday 17 April 2026 00:32:43 +0000 (0:00:04.228) 0:06:22.163 **********
2026-04-17 00:32:44.427877 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:32:44.427888 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:32:44.427899 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:32:44.427909 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:32:44.427920 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:32:44.427931 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:32:44.427941 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:32:44.427952 | orchestrator |
2026-04-17 00:32:44.427964 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-17 00:32:44.427975 | orchestrator | Friday 17 April 2026 00:32:44 +0000 (0:00:00.507) 0:06:22.671 **********
2026-04-17 00:32:44.427986 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-17 00:32:44.427998 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-17 00:32:44.428008 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:32:44.428019 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-17 00:32:44.428030 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-17 00:32:44.428041 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:32:44.428052 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-17 00:32:44.428063 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-17 00:32:44.428074 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:32:44.428097 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-17 00:33:03.678810 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-17 00:33:03.678958 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:03.678976 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-17 00:33:03.678990 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-17 00:33:03.679001 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:03.679013 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-17 00:33:03.679026 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-17 00:33:03.679037 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:03.679048 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-17 00:33:03.679060 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-17 00:33:03.679071 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:03.679117 | orchestrator |
2026-04-17 00:33:03.679132 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-17 00:33:03.679146 | orchestrator | Friday 17 April 2026 00:32:44 +0000 (0:00:00.544) 0:06:23.215 **********
2026-04-17 00:33:03.679158 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:03.679169 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:03.679181 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:03.679192 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:03.679204 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:03.679215 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:03.679226 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:03.679238 | orchestrator |
2026-04-17 00:33:03.679250 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-17 00:33:03.679262 | orchestrator | Friday 17 April 2026 00:32:45 +0000 (0:00:00.481) 0:06:23.696 **********
2026-04-17 00:33:03.679273 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:03.679284 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:03.679321 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:03.679332 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:03.679344 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:03.679355 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:03.679366 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:03.679377 | orchestrator |
2026-04-17 00:33:03.679389 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-17 00:33:03.679400 | orchestrator | Friday 17 April 2026 00:32:45 +0000 (0:00:00.653) 0:06:24.350 **********
2026-04-17 00:33:03.679412 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:03.679423 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:03.679434 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:03.679445 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:03.679457 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:03.679468 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:03.679479 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:03.679490 | orchestrator |
2026-04-17 00:33:03.679501 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-17 00:33:03.679513 | orchestrator | Friday 17 April 2026 00:32:46 +0000 (0:00:00.519) 0:06:24.869 **********
2026-04-17 00:33:03.679524 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:03.679536 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:03.679547 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:03.679558 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:03.679570 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:03.679581 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:03.679592 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:33:03.679603 | orchestrator |
2026-04-17 00:33:03.679614 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-17 00:33:03.679626 | orchestrator | Friday 17 April 2026 00:32:48 +0000 (0:00:01.948) 0:06:26.818 **********
2026-04-17 00:33:03.679638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:33:03.679654 | orchestrator |
2026-04-17 00:33:03.679665 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-17 00:33:03.679677 | orchestrator | Friday 17 April 2026 00:32:49 +0000 (0:00:00.837) 0:06:27.655 **********
2026-04-17 00:33:03.679688 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:03.679700 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:03.679711 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:03.679722 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:03.679734 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:03.679745 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:03.679757 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:03.679769 | orchestrator |
2026-04-17 00:33:03.679790 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-17 00:33:03.679802 | orchestrator | Friday 17 April 2026 00:32:50 +0000 (0:00:01.028) 0:06:28.684 **********
2026-04-17 00:33:03.679813 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:03.679824 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:03.679835 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:03.679847 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:03.679858 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:03.679869 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:03.679880 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:03.679892 | orchestrator |
2026-04-17 00:33:03.679903 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-17 00:33:03.679915 | orchestrator | Friday 17 April 2026 00:32:50 +0000 (0:00:00.835) 0:06:29.519 **********
2026-04-17 00:33:03.679926 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:03.679937 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:03.679949 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:03.679960 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:03.679971 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:03.679983 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:03.679994 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:03.680005 | orchestrator |
2026-04-17 00:33:03.680017 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-17 00:33:03.680053 | orchestrator | Friday 17 April 2026 00:32:52 +0000 (0:00:01.270) 0:06:30.790 **********
2026-04-17 00:33:03.680065 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:03.680076 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:03.680087 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:03.680099 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:03.680110 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:03.680121 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:03.680132 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:33:03.680144 | orchestrator |
2026-04-17 00:33:03.680155 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-17 00:33:03.680167 | orchestrator | Friday 17 April 2026 00:32:53 +0000 (0:00:01.386) 0:06:32.176 **********
2026-04-17 00:33:03.680178 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:03.680189 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:03.680201 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:03.680212 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:03.680223 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:03.680234 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:03.680245 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:03.680257 | orchestrator |
2026-04-17 00:33:03.680268 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-17 00:33:03.680280 | orchestrator | Friday 17 April 2026 00:32:55 +0000 (0:00:01.497) 0:06:33.674 **********
2026-04-17 00:33:03.680306 | orchestrator | changed: [testbed-manager]
2026-04-17 00:33:03.680318 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:03.680330 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:03.680341 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:03.680353 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:03.680364 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:03.680376 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:03.680387 | orchestrator |
2026-04-17 00:33:03.680399 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-17 00:33:03.680410 | orchestrator | Friday 17 April 2026 00:32:56 +0000 (0:00:00.830) 0:06:35.166 **********
2026-04-17 00:33:03.680422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:33:03.680435 | orchestrator |
2026-04-17 00:33:03.680446 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-17 00:33:03.680474 | orchestrator | Friday 17 April 2026 00:32:57 +0000 (0:00:00.830) 0:06:35.996 **********
2026-04-17 00:33:03.680486 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:03.680497 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:03.680508 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:03.680520 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:03.680531 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:03.680542 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:03.680554 | orchestrator | ok:
[testbed-node-5] 2026-04-17 00:33:03.680566 | orchestrator | 2026-04-17 00:33:03.680577 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-17 00:33:03.680589 | orchestrator | Friday 17 April 2026 00:32:58 +0000 (0:00:01.430) 0:06:37.427 ********** 2026-04-17 00:33:03.680600 | orchestrator | ok: [testbed-manager] 2026-04-17 00:33:03.680611 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:33:03.680623 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:33:03.680634 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:33:03.680646 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:33:03.680657 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:33:03.680668 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:33:03.680680 | orchestrator | 2026-04-17 00:33:03.680691 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-17 00:33:03.680703 | orchestrator | Friday 17 April 2026 00:33:00 +0000 (0:00:01.376) 0:06:38.803 ********** 2026-04-17 00:33:03.680714 | orchestrator | ok: [testbed-manager] 2026-04-17 00:33:03.680725 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:33:03.680736 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:33:03.680748 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:33:03.680759 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:33:03.680771 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:33:03.680783 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:33:03.680794 | orchestrator | 2026-04-17 00:33:03.680806 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-17 00:33:03.680835 | orchestrator | Friday 17 April 2026 00:33:01 +0000 (0:00:01.163) 0:06:39.967 ********** 2026-04-17 00:33:03.680847 | orchestrator | ok: [testbed-manager] 2026-04-17 00:33:03.680858 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:33:03.680870 | orchestrator | ok: [testbed-node-1] 2026-04-17 
00:33:03.680881 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:33:03.680892 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:33:03.680904 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:33:03.680915 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:33:03.680926 | orchestrator | 2026-04-17 00:33:03.680938 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-17 00:33:03.680949 | orchestrator | Friday 17 April 2026 00:33:02 +0000 (0:00:01.122) 0:06:41.089 ********** 2026-04-17 00:33:03.680961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:33:03.680973 | orchestrator | 2026-04-17 00:33:03.680984 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 00:33:03.680996 | orchestrator | Friday 17 April 2026 00:33:03 +0000 (0:00:00.845) 0:06:41.935 ********** 2026-04-17 00:33:03.681007 | orchestrator | 2026-04-17 00:33:03.681018 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 00:33:03.681030 | orchestrator | Friday 17 April 2026 00:33:03 +0000 (0:00:00.047) 0:06:41.982 ********** 2026-04-17 00:33:03.681041 | orchestrator | 2026-04-17 00:33:03.681053 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 00:33:03.681064 | orchestrator | Friday 17 April 2026 00:33:03 +0000 (0:00:00.170) 0:06:42.153 ********** 2026-04-17 00:33:03.681075 | orchestrator | 2026-04-17 00:33:03.681087 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-17 00:33:03.681106 | orchestrator | Friday 17 April 2026 00:33:03 +0000 (0:00:00.040) 0:06:42.194 ********** 2026-04-17 00:33:30.624678 | orchestrator | 
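The timing lines interleaved with the task banners above come from the `profile_tasks` callback: the parenthesised value is the duration of the task just finished, and the second stamp is the cumulative play time (e.g. `(0:00:01.948) 0:06:26.818`). A minimal Python sketch for pulling both values out of such a line when post-processing this log (the function name and regex are illustrative, not part of the job itself):

```python
import re

# Matches the profile_tasks timing lines seen in this log, e.g.
# "Friday 17 April 2026  00:32:46 +0000 (0:00:00.519)  0:06:24.869 **********"
TIMING = re.compile(
    r"\((?P<delta>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<elapsed>\d+:\d{2}:\d{2}\.\d+)"
)


def parse_timing(line: str) -> tuple[float, float]:
    """Return (task_duration_s, cumulative_elapsed_s) for one timing line."""
    m = TIMING.search(line)
    if m is None:
        raise ValueError(f"not a profile_tasks timing line: {line!r}")

    def to_seconds(stamp: str) -> float:
        h, mnt, s = stamp.split(":")
        return int(h) * 3600 + int(mnt) * 60 + float(s)

    return to_seconds(m.group("delta")), to_seconds(m.group("elapsed"))
```

Applied to the "Ensure that some packages are not installed" timing line above, it yields a task duration of 0.519 s at 384.869 s total elapsed.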
2026-04-17 00:33:30.624795 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-17 00:33:30.624812 | orchestrator | Friday 17 April 2026  00:33:03 +0000 (0:00:00.039)       0:06:42.234 **********
2026-04-17 00:33:30.624825 | orchestrator |
2026-04-17 00:33:30.624836 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-17 00:33:30.624848 | orchestrator | Friday 17 April 2026  00:33:03 +0000 (0:00:00.044)       0:06:42.278 **********
2026-04-17 00:33:30.624859 | orchestrator |
2026-04-17 00:33:30.624870 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-17 00:33:30.624881 | orchestrator | Friday 17 April 2026  00:33:03 +0000 (0:00:00.039)       0:06:42.317 **********
2026-04-17 00:33:30.624892 | orchestrator |
2026-04-17 00:33:30.624902 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-17 00:33:30.624913 | orchestrator | Friday 17 April 2026  00:33:03 +0000 (0:00:00.040)       0:06:42.357 **********
2026-04-17 00:33:30.624925 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:30.624937 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:30.624948 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:30.624959 | orchestrator |
2026-04-17 00:33:30.624970 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-17 00:33:30.624981 | orchestrator | Friday 17 April 2026  00:33:05 +0000 (0:00:01.268)       0:06:43.626 **********
2026-04-17 00:33:30.624993 | orchestrator | changed: [testbed-manager]
2026-04-17 00:33:30.625005 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:30.625016 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:30.625026 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:30.625037 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:30.625048 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:30.625059 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:30.625070 | orchestrator |
2026-04-17 00:33:30.625081 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-17 00:33:30.625092 | orchestrator | Friday 17 April 2026  00:33:06 +0000 (0:00:01.291)       0:06:44.917 **********
2026-04-17 00:33:30.625103 | orchestrator | changed: [testbed-manager]
2026-04-17 00:33:30.625114 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:30.625125 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:30.625136 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:30.625147 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:30.625157 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:30.625168 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:30.625179 | orchestrator |
2026-04-17 00:33:30.625192 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-17 00:33:30.625205 | orchestrator | Friday 17 April 2026  00:33:07 +0000 (0:00:01.281)       0:06:46.199 **********
2026-04-17 00:33:30.625217 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:30.625229 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:30.625271 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:30.625292 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:30.625311 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:30.625331 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:30.625351 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:30.625369 | orchestrator |
2026-04-17 00:33:30.625385 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-17 00:33:30.625399 | orchestrator | Friday 17 April 2026  00:33:10 +0000 (0:00:02.336)       0:06:48.535 **********
2026-04-17 00:33:30.625411 | orchestrator | skipping: [testbed-node-0]
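Each per-host result in this stream is a single `ok:`/`changed:`/`skipping:` line naming the inventory host. When summarising a run like this one, a small tally of statuses per host is often all that is needed; a sketch under the assumption that the log is read line by line (`tally` is an illustrative helper, not part of the job):

```python
import re
from collections import Counter

# Matches per-host result lines as printed in this console log,
# e.g. "changed: [testbed-node-0]" or "skipping: [testbed-manager]".
RESULT = re.compile(r"\b(ok|changed|skipping|failed|unreachable): \[([\w.-]+)\]")


def tally(lines) -> dict[str, Counter]:
    """Count result statuses per host across a chunk of log output."""
    counts: dict[str, Counter] = {}
    for line in lines:
        for status, host in RESULT.findall(line):
            counts.setdefault(host, Counter())[status] += 1
    return counts
```

Fed the handler block above, for example, it would show one `skipping` for `testbed-manager` and one `changed` per worker node for the docker restart.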
2026-04-17 00:33:30.625423 | orchestrator |
2026-04-17 00:33:30.625435 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-17 00:33:30.625448 | orchestrator | Friday 17 April 2026  00:33:10 +0000 (0:00:00.110)       0:06:48.645 **********
2026-04-17 00:33:30.625460 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:30.625472 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:30.625512 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:30.625525 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:30.625537 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:30.625550 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:30.625562 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:33:30.625574 | orchestrator |
2026-04-17 00:33:30.625585 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-17 00:33:30.625612 | orchestrator | Friday 17 April 2026  00:33:11 +0000 (0:00:01.272)       0:06:49.918 **********
2026-04-17 00:33:30.625623 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:30.625634 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:30.625645 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:30.625655 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:30.625666 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:30.625676 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:30.625687 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:30.625698 | orchestrator |
2026-04-17 00:33:30.625708 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-17 00:33:30.625719 | orchestrator | Friday 17 April 2026  00:33:11 +0000 (0:00:00.523)       0:06:50.441 **********
2026-04-17 00:33:30.625731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:33:30.625744 | orchestrator |
2026-04-17 00:33:30.625755 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-17 00:33:30.625765 | orchestrator | Friday 17 April 2026  00:33:12 +0000 (0:00:00.833)       0:06:51.275 **********
2026-04-17 00:33:30.625776 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:30.625787 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:30.625798 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:30.625808 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:30.625819 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:30.625829 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:30.625840 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:33:30.625850 | orchestrator |
2026-04-17 00:33:30.625861 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-17 00:33:30.625872 | orchestrator | Friday 17 April 2026  00:33:13 +0000 (0:00:01.126)       0:06:52.401 **********
2026-04-17 00:33:30.625883 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-17 00:33:30.625913 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-17 00:33:30.625925 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-17 00:33:30.625936 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-17 00:33:30.625947 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-17 00:33:30.625957 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-17 00:33:30.625968 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-17 00:33:30.625979 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-17 00:33:30.625990 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-17 00:33:30.626001 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-17 00:33:30.626012 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-17 00:33:30.626084 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-17 00:33:30.626096 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-17 00:33:30.626107 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-17 00:33:30.626117 | orchestrator |
2026-04-17 00:33:30.626128 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-17 00:33:30.626139 | orchestrator | Friday 17 April 2026  00:33:16 +0000 (0:00:02.564)       0:06:54.966 **********
2026-04-17 00:33:30.626150 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:30.626161 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:30.626180 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:30.626191 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:30.626202 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:30.626213 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:30.626223 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:30.626234 | orchestrator |
2026-04-17 00:33:30.626269 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-17 00:33:30.626281 | orchestrator | Friday 17 April 2026  00:33:16 +0000 (0:00:00.466)       0:06:55.433 **********
2026-04-17 00:33:30.626293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:33:30.626307 | orchestrator |
2026-04-17 00:33:30.626317 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-17 00:33:30.626328 | orchestrator | Friday 17 April 2026  00:33:17 +0000 (0:00:00.932)       0:06:56.366 **********
2026-04-17 00:33:30.626339 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:30.626350 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:30.626360 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:30.626371 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:30.626381 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:30.626392 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:30.626403 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:33:30.626413 | orchestrator |
2026-04-17 00:33:30.626424 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-17 00:33:30.626441 | orchestrator | Friday 17 April 2026  00:33:18 +0000 (0:00:00.850)       0:06:57.217 **********
2026-04-17 00:33:30.626459 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:30.626472 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:30.626482 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:30.626493 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:30.626504 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:30.626515 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:30.626525 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:33:30.626536 | orchestrator |
2026-04-17 00:33:30.626547 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-17 00:33:30.626558 | orchestrator | Friday 17 April 2026  00:33:19 +0000 (0:00:00.812)       0:06:58.029 **********
2026-04-17 00:33:30.626569 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:30.626580 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:30.626591 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:30.626601 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:30.626612 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:30.626624 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:30.626634 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:30.626646 | orchestrator |
2026-04-17 00:33:30.626657 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-17 00:33:30.626668 | orchestrator | Friday 17 April 2026  00:33:19 +0000 (0:00:00.492)       0:06:58.522 **********
2026-04-17 00:33:30.626679 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:30.626690 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:33:30.626701 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:33:30.626712 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:33:30.626723 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:33:30.626733 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:33:30.626744 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:33:30.626755 | orchestrator |
2026-04-17 00:33:30.626766 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-17 00:33:30.626777 | orchestrator | Friday 17 April 2026  00:33:21 +0000 (0:00:01.556)       0:07:00.078 **********
2026-04-17 00:33:30.626788 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:33:30.626798 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:33:30.626809 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:33:30.626828 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:33:30.626838 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:33:30.626849 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:33:30.626860 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:33:30.626870 | orchestrator |
2026-04-17 00:33:30.626881 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-17 00:33:30.626892 | orchestrator | Friday 17 April 2026  00:33:22 +0000 (0:00:00.612)       0:07:00.691 **********
2026-04-17 00:33:30.626903 | orchestrator | ok: [testbed-manager]
2026-04-17 00:33:30.626915 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:33:30.626925 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:33:30.626936 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:33:30.626947 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:33:30.626958 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:33:30.626976 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:03.020386 | orchestrator |
2026-04-17 00:34:03.020503 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-17 00:34:03.020521 | orchestrator | Friday 17 April 2026  00:33:30 +0000 (0:00:08.525)       0:07:09.217 **********
2026-04-17 00:34:03.020534 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.020547 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:03.020559 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:03.020570 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:03.020581 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:03.020592 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:03.020603 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:03.020614 | orchestrator |
2026-04-17 00:34:03.020626 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-17 00:34:03.020637 | orchestrator | Friday 17 April 2026  00:33:31 +0000 (0:00:01.289)       0:07:10.506 **********
2026-04-17 00:34:03.020648 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.020660 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:03.020671 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:03.020682 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:03.020693 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:03.020704 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:03.020715 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:03.020726 | orchestrator |
2026-04-17 00:34:03.020737 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-17 00:34:03.020749 | orchestrator | Friday 17 April 2026  00:33:33 +0000 (0:00:01.814)       0:07:12.320 **********
2026-04-17 00:34:03.020760 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.020771 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:03.020782 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:03.020793 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:03.020804 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:03.020815 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:03.020826 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:03.020837 | orchestrator |
2026-04-17 00:34:03.020848 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-17 00:34:03.020859 | orchestrator | Friday 17 April 2026  00:33:35 +0000 (0:00:01.751)       0:07:14.072 **********
2026-04-17 00:34:03.020870 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.020881 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.020911 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.020925 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.020939 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.020953 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.020965 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.020978 | orchestrator |
2026-04-17 00:34:03.020991 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-17 00:34:03.021005 | orchestrator | Friday 17 April 2026  00:33:36 +0000 (0:00:00.863)       0:07:14.935 **********
2026-04-17 00:34:03.021018 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:34:03.021057 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:34:03.021070 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:34:03.021083 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:34:03.021096 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:34:03.021109 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:34:03.021121 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:34:03.021134 | orchestrator |
2026-04-17 00:34:03.021147 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-17 00:34:03.021161 | orchestrator | Friday 17 April 2026  00:33:37 +0000 (0:00:00.774)       0:07:15.710 **********
2026-04-17 00:34:03.021174 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:34:03.021187 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:34:03.021226 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:34:03.021239 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:34:03.021252 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:34:03.021265 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:34:03.021278 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:34:03.021289 | orchestrator |
2026-04-17 00:34:03.021300 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-17 00:34:03.021311 | orchestrator | Friday 17 April 2026  00:33:37 +0000 (0:00:00.671)       0:07:16.382 **********
2026-04-17 00:34:03.021323 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.021334 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.021345 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.021356 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.021372 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.021383 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.021394 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.021405 | orchestrator |
2026-04-17 00:34:03.021429 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
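Every task in this run is announced by a `TASK [...]` or `RUNNING HANDLER [...]` banner whose bracketed text is `<role> : <task name>` for role tasks (e.g. `osism.services.chrony : Gather variables for each operating system`). A sketch for splitting those banners apart when grouping timings by role; `parse_banner` and its regex are illustrative helpers, not part of the job:

```python
import re

# Matches task banners as printed in this log; the "role : " prefix is
# optional because play-level tasks (e.g. "Gathering Facts") omit it.
BANNER = re.compile(
    r"(?:TASK|RUNNING HANDLER) \[(?:(?P<role>[\w.]+) : )?(?P<task>[^\]]+)\]"
)


def parse_banner(line: str):
    """Return (role_or_None, task_name) from a TASK/HANDLER banner line."""
    m = BANNER.search(line)
    if m is None:
        return None
    return m.group("role"), m.group("task")
```

Combined with the timing line that follows each banner, this is enough to attribute the slow steps of this run (such as the 8.5 s docker-compose-plugin install) to their roles.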
2026-04-17 00:34:03.021440 | orchestrator | Friday 17 April 2026  00:33:38 +0000 (0:00:00.485)       0:07:16.868 **********
2026-04-17 00:34:03.021451 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.021474 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.021486 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.021497 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.021507 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.021518 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.021529 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.021540 | orchestrator |
2026-04-17 00:34:03.021552 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-17 00:34:03.021563 | orchestrator | Friday 17 April 2026  00:33:38 +0000 (0:00:00.490)       0:07:17.358 **********
2026-04-17 00:34:03.021574 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.021585 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.021595 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.021606 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.021617 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.021628 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.021639 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.021650 | orchestrator |
2026-04-17 00:34:03.021661 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-17 00:34:03.021672 | orchestrator | Friday 17 April 2026  00:33:39 +0000 (0:00:00.507)       0:07:17.866 **********
2026-04-17 00:34:03.021683 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.021694 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.021705 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.021716 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.021726 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.021737 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.021748 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.021759 | orchestrator |
2026-04-17 00:34:03.021790 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-17 00:34:03.021802 | orchestrator | Friday 17 April 2026  00:33:44 +0000 (0:00:05.386)       0:07:23.252 **********
2026-04-17 00:34:03.021821 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:34:03.021833 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:34:03.021844 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:34:03.021855 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:34:03.021866 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:34:03.021877 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:34:03.021888 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:34:03.021899 | orchestrator |
2026-04-17 00:34:03.021910 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-17 00:34:03.021922 | orchestrator | Friday 17 April 2026  00:33:45 +0000 (0:00:00.651)       0:07:23.903 **********
2026-04-17 00:34:03.021935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:34:03.021948 | orchestrator |
2026-04-17 00:34:03.021960 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-17 00:34:03.021971 | orchestrator | Friday 17 April 2026  00:33:46 +0000 (0:00:00.776)       0:07:24.680 **********
2026-04-17 00:34:03.021982 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.021993 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.022004 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.022077 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.022090 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.022101 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.022112 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.022123 | orchestrator |
2026-04-17 00:34:03.022135 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-17 00:34:03.022146 | orchestrator | Friday 17 April 2026  00:33:48 +0000 (0:00:02.122)       0:07:26.803 **********
2026-04-17 00:34:03.022156 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.022167 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.022178 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.022189 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.022216 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.022227 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.022238 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.022249 | orchestrator |
2026-04-17 00:34:03.022260 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-17 00:34:03.022271 | orchestrator | Friday 17 April 2026  00:33:49 +0000 (0:00:01.354)       0:07:28.157 **********
2026-04-17 00:34:03.022282 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:03.022293 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:03.022304 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:03.022315 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:03.022326 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:03.022337 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:03.022347 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:03.022358 | orchestrator |
2026-04-17 00:34:03.022369 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-17 00:34:03.022380 | orchestrator | Friday 17 April 2026  00:33:50 +0000 (0:00:00.986)       0:07:29.143 **********
2026-04-17 00:34:03.022392 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022405 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022416 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022428 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022444 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022464 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022475 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-17 00:34:03.022486 | orchestrator |
2026-04-17 00:34:03.022497 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-17 00:34:03.022508 | orchestrator | Friday 17 April 2026  00:33:52 +0000 (0:00:01.684)       0:07:30.828 **********
2026-04-17 00:34:03.022520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:34:03.022532 | orchestrator |
2026-04-17 00:34:03.022543 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-17 00:34:03.022554 |
orchestrator | Friday 17 April 2026 00:33:53 +0000 (0:00:00.905) 0:07:31.734 ********** 2026-04-17 00:34:03.022565 | orchestrator | changed: [testbed-manager] 2026-04-17 00:34:03.022576 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:34:03.022587 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:34:03.022598 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:34:03.022609 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:34:03.022620 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:34:03.022631 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:34:03.022642 | orchestrator | 2026-04-17 00:34:03.022661 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-17 00:34:34.393015 | orchestrator | Friday 17 April 2026 00:34:03 +0000 (0:00:09.803) 0:07:41.538 ********** 2026-04-17 00:34:34.393378 | orchestrator | ok: [testbed-manager] 2026-04-17 00:34:34.393476 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:34:34.393485 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:34:34.393491 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:34:34.393497 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:34:34.393503 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:34:34.393509 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:34:34.393515 | orchestrator | 2026-04-17 00:34:34.393523 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-17 00:34:34.393530 | orchestrator | Friday 17 April 2026 00:34:04 +0000 (0:00:01.847) 0:07:43.385 ********** 2026-04-17 00:34:34.393536 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:34:34.393542 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:34:34.393547 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:34:34.393553 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:34:34.393559 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:34:34.393564 | orchestrator | ok: [testbed-node-5] 
2026-04-17 00:34:34.393570 | orchestrator |
2026-04-17 00:34:34.393576 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-17 00:34:34.393582 | orchestrator | Friday 17 April 2026 00:34:06 +0000 (0:00:01.578) 0:07:44.964 **********
2026-04-17 00:34:34.393588 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.393594 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.393600 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.393605 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.393611 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.393616 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.393622 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.393627 | orchestrator |
2026-04-17 00:34:34.393633 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-17 00:34:34.393639 | orchestrator |
2026-04-17 00:34:34.393644 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-17 00:34:34.393650 | orchestrator | Friday 17 April 2026 00:34:07 +0000 (0:00:01.362) 0:07:46.326 **********
2026-04-17 00:34:34.393684 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:34:34.393690 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:34:34.393695 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:34:34.393701 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:34:34.393706 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:34:34.393711 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:34:34.393717 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:34:34.393722 | orchestrator |
2026-04-17 00:34:34.393728 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-17 00:34:34.393733 | orchestrator |
2026-04-17 00:34:34.393739 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-17 00:34:34.393744 | orchestrator | Friday 17 April 2026 00:34:08 +0000 (0:00:00.508) 0:07:46.834 **********
2026-04-17 00:34:34.393750 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.393755 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.393761 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.393766 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.393771 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.393777 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.393783 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.393788 | orchestrator |
2026-04-17 00:34:34.393794 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-17 00:34:34.393799 | orchestrator | Friday 17 April 2026 00:34:09 +0000 (0:00:01.335) 0:07:48.170 **********
2026-04-17 00:34:34.393805 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:34.393810 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:34.393816 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:34.393821 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:34.393826 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:34.393832 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:34.393837 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:34.393843 | orchestrator |
2026-04-17 00:34:34.393848 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-17 00:34:34.393854 | orchestrator | Friday 17 April 2026 00:34:11 +0000 (0:00:01.759) 0:07:49.930 **********
2026-04-17 00:34:34.393859 | orchestrator | skipping: [testbed-manager]
2026-04-17 00:34:34.393865 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:34:34.393870 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:34:34.393887 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:34:34.393892 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:34:34.393898 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:34:34.393903 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:34:34.393909 | orchestrator |
2026-04-17 00:34:34.393915 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-17 00:34:34.393920 | orchestrator | Friday 17 April 2026 00:34:11 +0000 (0:00:00.520) 0:07:50.451 **********
2026-04-17 00:34:34.393927 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:34:34.393934 | orchestrator |
2026-04-17 00:34:34.393939 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-17 00:34:34.393945 | orchestrator | Friday 17 April 2026 00:34:12 +0000 (0:00:00.785) 0:07:51.236 **********
2026-04-17 00:34:34.393952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:34:34.393960 | orchestrator |
2026-04-17 00:34:34.393966 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-17 00:34:34.393971 | orchestrator | Friday 17 April 2026 00:34:13 +0000 (0:00:00.962) 0:07:52.199 **********
2026-04-17 00:34:34.393976 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.393982 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.393992 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.393998 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394003 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394008 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394014 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394106 | orchestrator |
2026-04-17 00:34:34.394135 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-17 00:34:34.394142 | orchestrator | Friday 17 April 2026 00:34:23 +0000 (0:00:09.480) 0:08:01.680 **********
2026-04-17 00:34:34.394176 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.394182 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394187 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.394193 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.394198 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394204 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394209 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394215 | orchestrator |
2026-04-17 00:34:34.394220 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-17 00:34:34.394225 | orchestrator | Friday 17 April 2026 00:34:23 +0000 (0:00:00.829) 0:08:02.509 **********
2026-04-17 00:34:34.394231 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.394237 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.394242 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.394247 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394253 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394258 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394263 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394269 | orchestrator |
2026-04-17 00:34:34.394274 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-17 00:34:34.394280 | orchestrator | Friday 17 April 2026 00:34:25 +0000 (0:00:01.462) 0:08:03.972 **********
2026-04-17 00:34:34.394285 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.394291 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394296 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.394301 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.394307 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394312 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394317 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394323 | orchestrator |
2026-04-17 00:34:34.394328 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-17 00:34:34.394334 | orchestrator | Friday 17 April 2026 00:34:27 +0000 (0:00:01.930) 0:08:05.902 **********
2026-04-17 00:34:34.394339 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.394344 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394350 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.394355 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394360 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.394366 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394371 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394376 | orchestrator |
2026-04-17 00:34:34.394382 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-17 00:34:34.394387 | orchestrator | Friday 17 April 2026 00:34:28 +0000 (0:00:01.194) 0:08:07.097 **********
2026-04-17 00:34:34.394393 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.394398 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394404 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.394409 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394414 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.394420 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394425 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394431 | orchestrator |
2026-04-17 00:34:34.394436 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-17 00:34:34.394441 | orchestrator |
2026-04-17 00:34:34.394447 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-17 00:34:34.394461 | orchestrator | Friday 17 April 2026 00:34:29 +0000 (0:00:01.126) 0:08:08.224 **********
2026-04-17 00:34:34.394466 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:34:34.394472 | orchestrator |
2026-04-17 00:34:34.394477 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-17 00:34:34.394483 | orchestrator | Friday 17 April 2026 00:34:30 +0000 (0:00:00.880) 0:08:09.104 **********
2026-04-17 00:34:34.394488 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:34.394494 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:34.394499 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:34.394505 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:34.394515 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:34.394520 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:34.394526 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:34.394531 | orchestrator |
2026-04-17 00:34:34.394536 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-17 00:34:34.394542 | orchestrator | Friday 17 April 2026 00:34:31 +0000 (0:00:00.813) 0:08:09.918 **********
2026-04-17 00:34:34.394547 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:34.394553 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:34.394558 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:34.394564 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:34.394569 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:34.394574 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:34.394580 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:34.394585 | orchestrator |
2026-04-17 00:34:34.394591 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-17 00:34:34.394596 | orchestrator | Friday 17 April 2026 00:34:32 +0000 (0:00:01.260) 0:08:11.179 **********
2026-04-17 00:34:34.394602 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:34:34.394607 | orchestrator |
2026-04-17 00:34:34.394613 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-17 00:34:34.394618 | orchestrator | Friday 17 April 2026 00:34:33 +0000 (0:00:00.853) 0:08:12.032 **********
2026-04-17 00:34:34.394623 | orchestrator | ok: [testbed-manager]
2026-04-17 00:34:34.394629 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:34:34.394634 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:34:34.394640 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:34:34.394645 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:34:34.394650 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:34:34.394656 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:34:34.394661 | orchestrator |
2026-04-17 00:34:34.394671 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-17 00:34:35.943027 | orchestrator | Friday 17 April 2026 00:34:34 +0000 (0:00:00.876) 0:08:12.908 **********
2026-04-17 00:34:35.943117 | orchestrator | changed: [testbed-manager]
2026-04-17 00:34:35.943130 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:34:35.943138 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:34:35.943185 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:34:35.943194 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:34:35.943202 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:34:35.943210 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:34:35.943219 | orchestrator |
2026-04-17 00:34:35.943228 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:34:35.943237 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-17 00:34:35.943247 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-17 00:34:35.943278 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-17 00:34:35.943286 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-17 00:34:35.943294 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-17 00:34:35.943302 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-17 00:34:35.943310 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-17 00:34:35.943318 | orchestrator |
2026-04-17 00:34:35.943326 | orchestrator |
2026-04-17 00:34:35.943334 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:34:35.943342 | orchestrator | Friday 17 April 2026 00:34:35 +0000 (0:00:01.246) 0:08:14.155 **********
2026-04-17 00:34:35.943350 | orchestrator | ===============================================================================
2026-04-17 00:34:35.943358 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.24s
2026-04-17 00:34:35.943366 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.96s
2026-04-17 00:34:35.943374 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.35s
2026-04-17 00:34:35.943382 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.69s
2026-04-17 00:34:35.943389 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.61s
2026-04-17 00:34:35.943397 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.90s
2026-04-17 00:34:35.943405 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.25s
2026-04-17 00:34:35.943414 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.23s
2026-04-17 00:34:35.943422 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.17s
2026-04-17 00:34:35.943430 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.80s
2026-04-17 00:34:35.943437 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.48s
2026-04-17 00:34:35.943445 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.18s
2026-04-17 00:34:35.943464 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.99s
2026-04-17 00:34:35.943473 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.76s
2026-04-17 00:34:35.943481 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.57s
2026-04-17 00:34:35.943489 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.53s
2026-04-17 00:34:35.943497 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 8.27s
2026-04-17 00:34:35.943504 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.58s
2026-04-17 00:34:35.943512 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.57s
2026-04-17 00:34:35.943520 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.39s
2026-04-17 00:34:36.116362 | orchestrator | + osism apply fail2ban
2026-04-17 00:34:47.830195 | orchestrator | 2026-04-17 00:34:47 | INFO  | Prepare task for execution of fail2ban.
2026-04-17 00:34:47.911309 | orchestrator | 2026-04-17 00:34:47 | INFO  | Task f2c7a507-3cc0-43fe-ae82-ae931e4eee38 (fail2ban) was prepared for execution.
2026-04-17 00:34:47.911407 | orchestrator | 2026-04-17 00:34:47 | INFO  | It takes a moment until task f2c7a507-3cc0-43fe-ae82-ae931e4eee38 (fail2ban) has been started and output is visible here.
2026-04-17 00:35:08.233124 | orchestrator |
2026-04-17 00:35:08.233220 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-17 00:35:08.233238 | orchestrator |
2026-04-17 00:35:08.233251 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-17 00:35:08.233263 | orchestrator | Friday 17 April 2026 00:34:51 +0000 (0:00:00.288) 0:00:00.288 **********
2026-04-17 00:35:08.233275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:35:08.233289 | orchestrator |
2026-04-17 00:35:08.233301 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-17 00:35:08.233312 | orchestrator | Friday 17 April 2026 00:34:52 +0000 (0:00:01.017) 0:00:01.306 **********
2026-04-17 00:35:08.233323 | orchestrator | changed: [testbed-manager]
2026-04-17 00:35:08.233335 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:35:08.233346 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:35:08.233357 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:35:08.233368 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:35:08.233379 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:35:08.233390 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:35:08.233401 | orchestrator |
2026-04-17 00:35:08.233412 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-17 00:35:08.233423 | orchestrator | Friday 17 April 2026 00:35:03 +0000 (0:00:11.350) 0:00:12.656 **********
2026-04-17 00:35:08.233434 | orchestrator | changed: [testbed-manager]
2026-04-17 00:35:08.233445 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:35:08.233456 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:35:08.233467 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:35:08.233478 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:35:08.233488 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:35:08.233499 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:35:08.233510 | orchestrator |
2026-04-17 00:35:08.233521 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-17 00:35:08.233532 | orchestrator | Friday 17 April 2026 00:35:05 +0000 (0:00:01.235) 0:00:14.327 **********
2026-04-17 00:35:08.233544 | orchestrator | ok: [testbed-manager]
2026-04-17 00:35:08.233555 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:35:08.233566 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:35:08.233577 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:35:08.233588 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:35:08.233599 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:35:08.233610 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:35:08.233621 | orchestrator |
2026-04-17 00:35:08.233632 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-17 00:35:08.233643 | orchestrator | Friday 17 April 2026 00:35:06 +0000 (0:00:01.235) 0:00:15.563 **********
2026-04-17 00:35:08.233655 | orchestrator | changed: [testbed-manager]
2026-04-17 00:35:08.233669 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:35:08.233682 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:35:08.233695 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:35:08.233708 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:35:08.233721 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:35:08.233734 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:35:08.233746 | orchestrator |
2026-04-17 00:35:08.233759 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:35:08.233772 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233786 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233799 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233836 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233850 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233863 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233875 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:35:08.233888 | orchestrator |
2026-04-17 00:35:08.233901 | orchestrator |
2026-04-17 00:35:08.233914 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:35:08.233926 | orchestrator | Friday 17 April 2026 00:35:07 +0000 (0:00:01.615) 0:00:17.178 **********
2026-04-17 00:35:08.233939 | orchestrator | ===============================================================================
2026-04-17 00:35:08.233952 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.35s
2026-04-17 00:35:08.233964 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.67s
2026-04-17 00:35:08.233976 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s
2026-04-17 00:35:08.233988 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.24s
2026-04-17 00:35:08.234001 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.02s
2026-04-17 00:35:08.405215 | orchestrator | + osism apply network
2026-04-17 00:35:19.835531 | orchestrator | 2026-04-17 00:35:19 | INFO  | Prepare task for execution of network.
2026-04-17 00:35:19.902597 | orchestrator | 2026-04-17 00:35:19 | INFO  | Task 31677110-837b-45a1-8f73-98b86e4eaa11 (network) was prepared for execution.
2026-04-17 00:35:19.902692 | orchestrator | 2026-04-17 00:35:19 | INFO  | It takes a moment until task 31677110-837b-45a1-8f73-98b86e4eaa11 (network) has been started and output is visible here.
2026-04-17 00:35:46.945518 | orchestrator | 2026-04-17 00:35:46.945629 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-17 00:35:46.945646 | orchestrator | 2026-04-17 00:35:46.945659 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-17 00:35:46.945671 | orchestrator | Friday 17 April 2026 00:35:22 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-04-17 00:35:46.945683 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.945696 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:35:46.945725 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.945737 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.945748 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.945759 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:35:46.945770 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:35:46.945781 | orchestrator | 2026-04-17 00:35:46.945793 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-17 00:35:46.945804 | orchestrator | Friday 17 April 2026 00:35:23 +0000 (0:00:00.550) 0:00:00.834 ********** 2026-04-17 00:35:46.945817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:35:46.945831 | orchestrator | 2026-04-17 00:35:46.945843 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-17 00:35:46.945854 | orchestrator | Friday 17 April 2026 00:35:24 +0000 (0:00:01.026) 0:00:01.861 ********** 2026-04-17 00:35:46.945865 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.945877 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.945887 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:35:46.945898 | 
orchestrator | ok: [testbed-node-4] 2026-04-17 00:35:46.945936 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.945947 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.945958 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:35:46.945969 | orchestrator | 2026-04-17 00:35:46.945980 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-17 00:35:46.945991 | orchestrator | Friday 17 April 2026 00:35:26 +0000 (0:00:02.528) 0:00:04.390 ********** 2026-04-17 00:35:46.946002 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.946121 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:35:46.946149 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.946168 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.946186 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.946204 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:35:46.946223 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:35:46.946243 | orchestrator | 2026-04-17 00:35:46.946262 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-17 00:35:46.946281 | orchestrator | Friday 17 April 2026 00:35:28 +0000 (0:00:01.564) 0:00:05.955 ********** 2026-04-17 00:35:46.946292 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-17 00:35:46.946304 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-17 00:35:46.946315 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-17 00:35:46.946325 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-17 00:35:46.946336 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-17 00:35:46.946347 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-17 00:35:46.946358 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-17 00:35:46.946369 | orchestrator | 2026-04-17 00:35:46.946380 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-04-17 00:35:46.946392 | orchestrator | Friday 17 April 2026 00:35:29 +0000 (0:00:01.049) 0:00:07.004 ********** 2026-04-17 00:35:46.946403 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:35:46.946415 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:35:46.946426 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:35:46.946436 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:35:46.946447 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:35:46.946458 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:35:46.946468 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:35:46.946479 | orchestrator | 2026-04-17 00:35:46.946490 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-17 00:35:46.946510 | orchestrator | Friday 17 April 2026 00:35:30 +0000 (0:00:00.561) 0:00:07.565 ********** 2026-04-17 00:35:46.946521 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:35:46.946532 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:35:46.946543 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:35:46.946553 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:35:46.946564 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:35:46.946575 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:35:46.946585 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:35:46.946596 | orchestrator | 2026-04-17 00:35:46.946607 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-17 00:35:46.946618 | orchestrator | Friday 17 April 2026 00:35:30 +0000 (0:00:00.740) 0:00:08.306 ********** 2026-04-17 00:35:46.946628 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:35:46.946639 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:35:46.946649 | orchestrator | skipping: [testbed-node-1] 
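The netplan tasks above render a configuration file onto each host (the cleanup task later in this run lists /etc/netplan/01-osism.yaml as a managed file). A minimal sketch of what such a file can look like; the interface name and address below are illustrative assumptions, not values taken from this job's output:

```yaml
# Hypothetical sketch of /etc/netplan/01-osism.yaml.
# Interface name and address are illustrative assumptions,
# not taken from this job's output.
network:
  version: 2
  ethernets:
    ens3:                     # assumed interface name
      dhcp4: false
      addresses:
        - 192.168.16.10/20    # management network, same range the VXLAN endpoints use
```

Applying a file like this replaces the cloud-init provided /etc/netplan/50-cloud-init.yaml, which the "Remove unused configuration files" task later in the run deletes.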
2026-04-17 00:35:46.946660 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:35:46.946671 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:35:46.946681 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:35:46.946692 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:35:46.946703 | orchestrator | 2026-04-17 00:35:46.946713 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-17 00:35:46.946724 | orchestrator | Friday 17 April 2026 00:35:31 +0000 (0:00:00.658) 0:00:08.964 ********** 2026-04-17 00:35:46.946747 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 00:35:46.946758 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 00:35:46.946768 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 00:35:46.946779 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 00:35:46.946790 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 00:35:46.946801 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 00:35:46.946811 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 00:35:46.946822 | orchestrator | 2026-04-17 00:35:46.946856 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-17 00:35:46.946868 | orchestrator | Friday 17 April 2026 00:35:34 +0000 (0:00:02.981) 0:00:11.946 ********** 2026-04-17 00:35:46.946879 | orchestrator | changed: [testbed-manager] 2026-04-17 00:35:46.946890 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:35:46.946901 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:35:46.946912 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:35:46.946923 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:35:46.946933 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:35:46.946944 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:35:46.946955 | orchestrator | 2026-04-17 00:35:46.946966 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-17 00:35:46.946976 | orchestrator | Friday 17 April 2026 00:35:36 +0000 (0:00:01.588) 0:00:13.535 ********** 2026-04-17 00:35:46.946987 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 00:35:46.946998 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 00:35:46.947009 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 00:35:46.947020 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 00:35:46.947030 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 00:35:46.947074 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 00:35:46.947085 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 00:35:46.947096 | orchestrator | 2026-04-17 00:35:46.947107 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-17 00:35:46.947118 | orchestrator | Friday 17 April 2026 00:35:37 +0000 (0:00:01.483) 0:00:15.019 ********** 2026-04-17 00:35:46.947129 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.947140 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:35:46.947151 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.947162 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.947172 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.947183 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:35:46.947194 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:35:46.947204 | orchestrator | 2026-04-17 00:35:46.947215 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-17 00:35:46.947226 | orchestrator | Friday 17 April 2026 00:35:38 +0000 (0:00:01.021) 0:00:16.040 ********** 2026-04-17 00:35:46.947237 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:35:46.947248 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:35:46.947259 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
00:35:46.947269 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:35:46.947280 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:35:46.947291 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:35:46.947302 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:35:46.947312 | orchestrator | 2026-04-17 00:35:46.947323 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-17 00:35:46.947334 | orchestrator | Friday 17 April 2026 00:35:39 +0000 (0:00:00.563) 0:00:16.603 ********** 2026-04-17 00:35:46.947345 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.947356 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:35:46.947367 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.947377 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.947388 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.947399 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:35:46.947417 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:35:46.947442 | orchestrator | 2026-04-17 00:35:46.947453 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-17 00:35:46.947464 | orchestrator | Friday 17 April 2026 00:35:41 +0000 (0:00:02.200) 0:00:18.804 ********** 2026-04-17 00:35:46.947475 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:35:46.947486 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:35:46.947497 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:35:46.947508 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:35:46.947519 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:35:46.947529 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:35:46.947540 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-17 00:35:46.947552 | orchestrator | 2026-04-17 00:35:46.947563 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-17 00:35:46.947574 | orchestrator | Friday 17 April 2026 00:35:42 +0000 (0:00:00.801) 0:00:19.605 ********** 2026-04-17 00:35:46.947585 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.947601 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:35:46.947612 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:35:46.947623 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:35:46.947634 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:35:46.947645 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:35:46.947656 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:35:46.947667 | orchestrator | 2026-04-17 00:35:46.947678 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-17 00:35:46.947689 | orchestrator | Friday 17 April 2026 00:35:43 +0000 (0:00:01.556) 0:00:21.162 ********** 2026-04-17 00:35:46.947700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:35:46.947713 | orchestrator | 2026-04-17 00:35:46.947724 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-17 00:35:46.947735 | orchestrator | Friday 17 April 2026 00:35:44 +0000 (0:00:01.186) 0:00:22.348 ********** 2026-04-17 00:35:46.947746 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.947757 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.947767 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.947778 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.947789 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:35:46.947800 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:35:46.947810 | orchestrator | ok: [testbed-node-0] 2026-04-17 
00:35:46.947821 | orchestrator | 2026-04-17 00:35:46.947832 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-17 00:35:46.947843 | orchestrator | Friday 17 April 2026 00:35:46 +0000 (0:00:01.549) 0:00:23.898 ********** 2026-04-17 00:35:46.947854 | orchestrator | ok: [testbed-manager] 2026-04-17 00:35:46.947864 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:35:46.947875 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:35:46.947886 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:35:46.947896 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:35:46.947915 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:36:01.690422 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:36:01.690533 | orchestrator | 2026-04-17 00:36:01.690552 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-17 00:36:01.690565 | orchestrator | Friday 17 April 2026 00:35:47 +0000 (0:00:00.566) 0:00:24.464 ********** 2026-04-17 00:36:01.690577 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690589 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690600 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690611 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690647 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690659 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690670 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690680 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690691 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690702 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690713 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-17 00:36:01.690724 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690734 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690765 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-17 00:36:01.690787 | orchestrator | 2026-04-17 00:36:01.690798 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-17 00:36:01.690809 | orchestrator | Friday 17 April 2026 00:35:48 +0000 (0:00:01.074) 0:00:25.538 ********** 2026-04-17 00:36:01.690819 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:36:01.690831 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:01.690842 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:01.690853 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:01.690863 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:01.690874 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:01.690885 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:01.690896 | orchestrator | 2026-04-17 00:36:01.690907 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-17 00:36:01.690918 | orchestrator | Friday 17 April 2026 00:35:48 +0000 (0:00:00.543) 0:00:26.082 ********** 2026-04-17 00:36:01.690932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2026-04-17 00:36:01.690945 | orchestrator | 2026-04-17 
00:36:01.690956 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-17 00:36:01.690970 | orchestrator | Friday 17 April 2026 00:35:52 +0000 (0:00:03.780) 0:00:29.863 ********** 2026-04-17 00:36:01.690985 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-17 00:36:01.691035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691050 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-17 00:36:01.691063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'addresses': ['192.168.128.11/20']}}) 2026-04-17 00:36:01.691132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-17 00:36:01.691194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-17 00:36:01.691208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-17 00:36:01.691222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-17 00:36:01.691235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-17 00:36:01.691248 | orchestrator | 2026-04-17 00:36:01.691261 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-17 00:36:01.691274 | orchestrator | Friday 17 April 2026 00:35:57 +0000 (0:00:04.877) 0:00:34.740 ********** 2026-04-17 00:36:01.691287 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-17 00:36:01.691300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691350 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-17 00:36:01.691392 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-17 00:36:12.817907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-17 00:36:12.818143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-17 00:36:12.818169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-17 00:36:12.818182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-17 00:36:12.818194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-17 00:36:12.818205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-17 00:36:12.818218 | orchestrator | 2026-04-17 00:36:12.818231 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-17 00:36:12.818243 | orchestrator | Friday 17 April 2026 00:36:02 +0000 (0:00:05.205) 0:00:39.946 ********** 2026-04-17 00:36:12.818256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:36:12.818268 | orchestrator | 2026-04-17 00:36:12.818280 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-17 00:36:12.818292 | orchestrator | Friday 17 April 2026 00:36:03 +0000 (0:00:01.316) 0:00:41.263 ********** 2026-04-17 00:36:12.818303 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:12.818316 | orchestrator | ok: [testbed-node-0] 2026-04-17 
00:36:12.818327 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:36:12.818338 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:36:12.818349 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:36:12.818360 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:36:12.818392 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:36:12.818404 | orchestrator | 2026-04-17 00:36:12.818415 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-17 00:36:12.818435 | orchestrator | Friday 17 April 2026 00:36:04 +0000 (0:00:00.944) 0:00:42.207 ********** 2026-04-17 00:36:12.818448 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818461 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818473 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818486 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818499 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:36:12.818513 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818526 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818538 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818550 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818562 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:12.818575 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818587 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818599 | orchestrator | 
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818612 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818624 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818636 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818649 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818679 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818692 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:12.818705 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818717 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818730 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818742 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818754 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:12.818766 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818779 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818790 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818801 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818813 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:12.818824 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:12.818836 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-04-17 00:36:12.818847 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-17 00:36:12.818858 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-17 00:36:12.818869 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-17 00:36:12.818880 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:12.818899 | orchestrator | 2026-04-17 00:36:12.818910 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-17 00:36:12.818921 | orchestrator | Friday 17 April 2026 00:36:05 +0000 (0:00:00.908) 0:00:43.115 ********** 2026-04-17 00:36:12.818933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:36:12.818944 | orchestrator | 2026-04-17 00:36:12.818955 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-17 00:36:12.818967 | orchestrator | Friday 17 April 2026 00:36:06 +0000 (0:00:01.178) 0:00:44.294 ********** 2026-04-17 00:36:12.818978 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:36:12.818989 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:12.819032 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:12.819050 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:12.819070 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:12.819088 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:12.819106 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:12.819117 | orchestrator | 2026-04-17 00:36:12.819128 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 
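The "Create systemd networkd netdev files" and "network files" tasks above write one unit pair per VXLAN into /etc/systemd/network (the cleanup listing shows names like 30-vxlan0.netdev). A minimal sketch of the vxlan0 netdev for testbed-node-0, using values from the task items above; the exact template the role uses is not shown in this log, so the option layout is an assumption:

```ini
; Sketch of /etc/systemd/network/30-vxlan0.netdev for testbed-node-0
; (vni/mtu/local_ip values from the task output above; layout assumed).
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10
```

The per-peer `dests` lists in the task items are typically realized as static forwarding entries for the remote endpoints in the matching .network file, since unicast VXLAN without a multicast group needs one entry per remote VTEP.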
2026-04-17 00:36:12.819139 | orchestrator | Friday 17 April 2026 00:36:07 +0000 (0:00:00.592) 0:00:44.887 ********** 2026-04-17 00:36:12.819150 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:36:12.819161 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:12.819172 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:12.819183 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:12.819194 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:12.819204 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:12.819215 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:12.819226 | orchestrator | 2026-04-17 00:36:12.819237 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-17 00:36:12.819248 | orchestrator | Friday 17 April 2026 00:36:08 +0000 (0:00:00.780) 0:00:45.667 ********** 2026-04-17 00:36:12.819259 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:36:12.819270 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:12.819281 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:12.819292 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:12.819302 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:12.819313 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:12.819324 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:12.819335 | orchestrator | 2026-04-17 00:36:12.819346 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-17 00:36:12.819357 | orchestrator | Friday 17 April 2026 00:36:08 +0000 (0:00:00.585) 0:00:46.252 ********** 2026-04-17 00:36:12.819368 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:36:12.819379 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:36:12.819390 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:12.819401 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:36:12.819412 | orchestrator | ok: 
[testbed-node-0] 2026-04-17 00:36:12.819423 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:36:12.819433 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:36:12.819444 | orchestrator | 2026-04-17 00:36:12.819456 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-17 00:36:12.819467 | orchestrator | Friday 17 April 2026 00:36:10 +0000 (0:00:01.726) 0:00:47.979 ********** 2026-04-17 00:36:12.819478 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:12.819489 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:36:12.819499 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:36:12.819510 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:36:12.819521 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:36:12.819531 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:36:12.819542 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:36:12.819553 | orchestrator | 2026-04-17 00:36:12.819564 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-17 00:36:12.819582 | orchestrator | Friday 17 April 2026 00:36:11 +0000 (0:00:01.147) 0:00:49.127 ********** 2026-04-17 00:36:12.819593 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:12.819604 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:36:12.819615 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:36:12.819625 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:36:12.819636 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:36:12.819647 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:36:12.819666 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:36:15.202559 | orchestrator | 2026-04-17 00:36:15.202662 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-17 00:36:15.202678 | orchestrator | Friday 17 April 2026 00:36:13 +0000 (0:00:02.082) 0:00:51.209 ********** 2026-04-17 00:36:15.202689 | orchestrator | skipping: [testbed-manager] 2026-04-17 
00:36:15.202703 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:15.202738 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:15.202749 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:15.202760 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:15.202772 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:15.202782 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:15.202794 | orchestrator | 2026-04-17 00:36:15.202805 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-17 00:36:15.202817 | orchestrator | Friday 17 April 2026 00:36:14 +0000 (0:00:00.687) 0:00:51.897 ********** 2026-04-17 00:36:15.202828 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:36:15.202839 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:36:15.202850 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:36:15.202861 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:36:15.202872 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:36:15.202883 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:36:15.202893 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:36:15.202905 | orchestrator | 2026-04-17 00:36:15.202916 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:36:15.202932 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-17 00:36:15.202952 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:36:15.202971 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:36:15.203019 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:36:15.203040 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 
failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:36:15.203052 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:36:15.203085 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:36:15.203096 | orchestrator | 2026-04-17 00:36:15.203110 | orchestrator | 2026-04-17 00:36:15.203127 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:36:15.203141 | orchestrator | Friday 17 April 2026 00:36:14 +0000 (0:00:00.478) 0:00:52.375 ********** 2026-04-17 00:36:15.203153 | orchestrator | =============================================================================== 2026-04-17 00:36:15.203165 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.21s 2026-04-17 00:36:15.203178 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.88s 2026-04-17 00:36:15.203215 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.78s 2026-04-17 00:36:15.203233 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.98s 2026-04-17 00:36:15.203246 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.53s 2026-04-17 00:36:15.203258 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s 2026-04-17 00:36:15.203271 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.08s 2026-04-17 00:36:15.203283 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.73s 2026-04-17 00:36:15.203294 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2026-04-17 00:36:15.203304 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 
1.57s 2026-04-17 00:36:15.203315 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.56s 2026-04-17 00:36:15.203326 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.55s 2026-04-17 00:36:15.203336 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.48s 2026-04-17 00:36:15.203347 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s 2026-04-17 00:36:15.203357 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s 2026-04-17 00:36:15.203368 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.18s 2026-04-17 00:36:15.203379 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.15s 2026-04-17 00:36:15.203389 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.07s 2026-04-17 00:36:15.203400 | orchestrator | osism.commons.network : Create required directories --------------------- 1.05s 2026-04-17 00:36:15.203411 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.03s 2026-04-17 00:36:15.329073 | orchestrator | + osism apply wireguard 2026-04-17 00:36:26.577927 | orchestrator | 2026-04-17 00:36:26 | INFO  | Prepare task for execution of wireguard. 2026-04-17 00:36:26.642898 | orchestrator | 2026-04-17 00:36:26 | INFO  | Task cf4aa310-8e38-4fd0-9cda-08e9b1d72287 (wireguard) was prepared for execution. 2026-04-17 00:36:26.644543 | orchestrator | 2026-04-17 00:36:26 | INFO  | It takes a moment until task cf4aa310-8e38-4fd0-9cda-08e9b1d72287 (wireguard) has been started and output is visible here. 
2026-04-17 00:36:44.540200 | orchestrator | 2026-04-17 00:36:44.540313 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-17 00:36:44.540331 | orchestrator | 2026-04-17 00:36:44.540344 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-17 00:36:44.540356 | orchestrator | Friday 17 April 2026 00:36:29 +0000 (0:00:00.266) 0:00:00.266 ********** 2026-04-17 00:36:44.540367 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:44.540380 | orchestrator | 2026-04-17 00:36:44.540391 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-17 00:36:44.540403 | orchestrator | Friday 17 April 2026 00:36:31 +0000 (0:00:01.745) 0:00:02.011 ********** 2026-04-17 00:36:44.540414 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540426 | orchestrator | 2026-04-17 00:36:44.540437 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-17 00:36:44.540448 | orchestrator | Friday 17 April 2026 00:36:37 +0000 (0:00:06.123) 0:00:08.134 ********** 2026-04-17 00:36:44.540459 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540470 | orchestrator | 2026-04-17 00:36:44.540481 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-17 00:36:44.540492 | orchestrator | Friday 17 April 2026 00:36:38 +0000 (0:00:00.543) 0:00:08.678 ********** 2026-04-17 00:36:44.540503 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540513 | orchestrator | 2026-04-17 00:36:44.540524 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-17 00:36:44.540562 | orchestrator | Friday 17 April 2026 00:36:38 +0000 (0:00:00.412) 0:00:09.090 ********** 2026-04-17 00:36:44.540574 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:44.540584 | orchestrator | 2026-04-17 
00:36:44.540596 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-17 00:36:44.540606 | orchestrator | Friday 17 April 2026 00:36:38 +0000 (0:00:00.542) 0:00:09.633 ********** 2026-04-17 00:36:44.540617 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:44.540628 | orchestrator | 2026-04-17 00:36:44.540639 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-17 00:36:44.540650 | orchestrator | Friday 17 April 2026 00:36:39 +0000 (0:00:00.416) 0:00:10.050 ********** 2026-04-17 00:36:44.540660 | orchestrator | ok: [testbed-manager] 2026-04-17 00:36:44.540671 | orchestrator | 2026-04-17 00:36:44.540682 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-17 00:36:44.540693 | orchestrator | Friday 17 April 2026 00:36:39 +0000 (0:00:00.403) 0:00:10.454 ********** 2026-04-17 00:36:44.540704 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540714 | orchestrator | 2026-04-17 00:36:44.540725 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-17 00:36:44.540736 | orchestrator | Friday 17 April 2026 00:36:40 +0000 (0:00:01.173) 0:00:11.627 ********** 2026-04-17 00:36:44.540747 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-17 00:36:44.540759 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540770 | orchestrator | 2026-04-17 00:36:44.540781 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-17 00:36:44.540791 | orchestrator | Friday 17 April 2026 00:36:41 +0000 (0:00:00.906) 0:00:12.533 ********** 2026-04-17 00:36:44.540802 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540813 | orchestrator | 2026-04-17 00:36:44.540824 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-17 
00:36:44.540834 | orchestrator | Friday 17 April 2026 00:36:43 +0000 (0:00:01.690) 0:00:14.224 ********** 2026-04-17 00:36:44.540846 | orchestrator | changed: [testbed-manager] 2026-04-17 00:36:44.540856 | orchestrator | 2026-04-17 00:36:44.540883 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:36:44.540895 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:36:44.540907 | orchestrator | 2026-04-17 00:36:44.540918 | orchestrator | 2026-04-17 00:36:44.540929 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:36:44.540940 | orchestrator | Friday 17 April 2026 00:36:44 +0000 (0:00:00.835) 0:00:15.060 ********** 2026-04-17 00:36:44.541006 | orchestrator | =============================================================================== 2026-04-17 00:36:44.541017 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.12s 2026-04-17 00:36:44.541028 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.75s 2026-04-17 00:36:44.541039 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2026-04-17 00:36:44.541049 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-04-17 00:36:44.541060 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2026-04-17 00:36:44.541071 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s 2026-04-17 00:36:44.541082 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2026-04-17 00:36:44.541092 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2026-04-17 00:36:44.541103 | orchestrator | osism.services.wireguard : Get 
public key - server ---------------------- 0.42s 2026-04-17 00:36:44.541114 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2026-04-17 00:36:44.541125 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2026-04-17 00:36:44.651656 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-17 00:36:44.683322 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-17 00:36:44.683416 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-17 00:36:44.753882 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 210 0 --:--:-- --:--:-- --:--:-- 214 2026-04-17 00:36:44.766267 | orchestrator | + osism apply --environment custom workarounds 2026-04-17 00:36:45.869137 | orchestrator | 2026-04-17 00:36:45 | INFO  | Trying to run play workarounds in environment custom 2026-04-17 00:36:55.997993 | orchestrator | 2026-04-17 00:36:55 | INFO  | Prepare task for execution of workarounds. 2026-04-17 00:36:56.066480 | orchestrator | 2026-04-17 00:36:56 | INFO  | Task fd05f032-65f9-4ee5-bb81-c821b0e14a5c (workarounds) was prepared for execution. 2026-04-17 00:36:56.066581 | orchestrator | 2026-04-17 00:36:56 | INFO  | It takes a moment until task fd05f032-65f9-4ee5-bb81-c821b0e14a5c (workarounds) has been started and output is visible here. 
2026-04-17 00:37:20.211942 | orchestrator | 2026-04-17 00:37:20.212055 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:37:20.212072 | orchestrator | 2026-04-17 00:37:20.212084 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-17 00:37:20.212096 | orchestrator | Friday 17 April 2026 00:36:59 +0000 (0:00:00.175) 0:00:00.175 ********** 2026-04-17 00:37:20.212108 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212120 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212131 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212142 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212153 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212164 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212174 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-17 00:37:20.212185 | orchestrator | 2026-04-17 00:37:20.212196 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-17 00:37:20.212207 | orchestrator | 2026-04-17 00:37:20.212219 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-17 00:37:20.212230 | orchestrator | Friday 17 April 2026 00:36:59 +0000 (0:00:00.721) 0:00:00.897 ********** 2026-04-17 00:37:20.212242 | orchestrator | ok: [testbed-manager] 2026-04-17 00:37:20.212254 | orchestrator | 2026-04-17 00:37:20.212265 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-17 00:37:20.212276 | orchestrator | 2026-04-17 00:37:20.212287 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-17 00:37:20.212298 | orchestrator | Friday 17 April 2026 00:37:02 +0000 (0:00:02.748) 0:00:03.645 ********** 2026-04-17 00:37:20.212309 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:37:20.212320 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:37:20.212331 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:37:20.212342 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:37:20.212353 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:37:20.212364 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:37:20.212375 | orchestrator | 2026-04-17 00:37:20.212386 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-17 00:37:20.212397 | orchestrator | 2026-04-17 00:37:20.212408 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-17 00:37:20.212419 | orchestrator | Friday 17 April 2026 00:37:05 +0000 (0:00:02.312) 0:00:05.957 ********** 2026-04-17 00:37:20.212447 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 00:37:20.212461 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 00:37:20.212494 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 00:37:20.212507 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 00:37:20.212520 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 00:37:20.212532 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-17 00:37:20.212544 | orchestrator | 2026-04-17 00:37:20.212557 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-17 00:37:20.212569 | orchestrator | Friday 17 April 2026 00:37:06 +0000 (0:00:01.318) 0:00:07.276 ********** 2026-04-17 00:37:20.212582 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:37:20.212610 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:37:20.212623 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:37:20.212647 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:37:20.212670 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:37:20.212682 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:37:20.212695 | orchestrator | 2026-04-17 00:37:20.212707 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-17 00:37:20.212720 | orchestrator | Friday 17 April 2026 00:37:10 +0000 (0:00:03.917) 0:00:11.193 ********** 2026-04-17 00:37:20.212732 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:37:20.212744 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:37:20.212757 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:37:20.212769 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:37:20.212782 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:37:20.212794 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:37:20.212806 | orchestrator | 2026-04-17 00:37:20.212819 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-17 00:37:20.212832 | orchestrator | 2026-04-17 00:37:20.212843 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-17 00:37:20.212854 | orchestrator | Friday 17 April 2026 00:37:10 +0000 (0:00:00.517) 0:00:11.711 ********** 2026-04-17 00:37:20.212865 | orchestrator | changed: [testbed-manager] 2026-04-17 00:37:20.212876 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:37:20.212887 | orchestrator | changed: [testbed-node-1] 2026-04-17 
00:37:20.212951 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:37:20.212963 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:37:20.212974 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:37:20.212985 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:37:20.212996 | orchestrator | 2026-04-17 00:37:20.213007 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-17 00:37:20.213018 | orchestrator | Friday 17 April 2026 00:37:12 +0000 (0:00:01.627) 0:00:13.338 ********** 2026-04-17 00:37:20.213029 | orchestrator | changed: [testbed-manager] 2026-04-17 00:37:20.213039 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:37:20.213050 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:37:20.213061 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:37:20.213072 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:37:20.213083 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:37:20.213113 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:37:20.213125 | orchestrator | 2026-04-17 00:37:20.213136 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-17 00:37:20.213147 | orchestrator | Friday 17 April 2026 00:37:13 +0000 (0:00:01.462) 0:00:14.801 ********** 2026-04-17 00:37:20.213158 | orchestrator | ok: [testbed-manager] 2026-04-17 00:37:20.213169 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:37:20.213179 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:37:20.213190 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:37:20.213201 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:37:20.213212 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:37:20.213231 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:37:20.213242 | orchestrator | 2026-04-17 00:37:20.213253 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-17 00:37:20.213264 | orchestrator 
| Friday 17 April 2026 00:37:15 +0000 (0:00:01.585) 0:00:16.386 ********** 2026-04-17 00:37:20.213275 | orchestrator | changed: [testbed-manager] 2026-04-17 00:37:20.213286 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:37:20.213296 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:37:20.213307 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:37:20.213318 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:37:20.213329 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:37:20.213340 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:37:20.213351 | orchestrator | 2026-04-17 00:37:20.213362 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-17 00:37:20.213373 | orchestrator | Friday 17 April 2026 00:37:16 +0000 (0:00:01.494) 0:00:17.881 ********** 2026-04-17 00:37:20.213384 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:37:20.213395 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:37:20.213405 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:37:20.213416 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:37:20.213427 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:37:20.213438 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:37:20.213448 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:37:20.213459 | orchestrator | 2026-04-17 00:37:20.213471 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-17 00:37:20.213482 | orchestrator | 2026-04-17 00:37:20.213493 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-17 00:37:20.213504 | orchestrator | Friday 17 April 2026 00:37:17 +0000 (0:00:00.644) 0:00:18.525 ********** 2026-04-17 00:37:20.213515 | orchestrator | ok: [testbed-manager] 2026-04-17 00:37:20.213526 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:37:20.213537 | orchestrator | ok: 
[testbed-node-0] 2026-04-17 00:37:20.213547 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:37:20.213558 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:37:20.213569 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:37:20.213580 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:37:20.213590 | orchestrator | 2026-04-17 00:37:20.213607 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:37:20.213619 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:37:20.213631 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:20.213642 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:20.213653 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:20.213664 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:20.213675 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:20.213686 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:20.213697 | orchestrator | 2026-04-17 00:37:20.213708 | orchestrator | 2026-04-17 00:37:20.213719 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:37:20.213730 | orchestrator | Friday 17 April 2026 00:37:20 +0000 (0:00:02.607) 0:00:21.133 ********** 2026-04-17 00:37:20.213748 | orchestrator | =============================================================================== 2026-04-17 00:37:20.213758 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s 2026-04-17 00:37:20.213769 | orchestrator | Apply 
netplan configuration --------------------------------------------- 2.75s 2026-04-17 00:37:20.213780 | orchestrator | Install python3-docker -------------------------------------------------- 2.61s 2026-04-17 00:37:20.213791 | orchestrator | Apply netplan configuration --------------------------------------------- 2.31s 2026-04-17 00:37:20.213802 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.63s 2026-04-17 00:37:20.213812 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s 2026-04-17 00:37:20.213823 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.49s 2026-04-17 00:37:20.213834 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.46s 2026-04-17 00:37:20.213845 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.32s 2026-04-17 00:37:20.213856 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.72s 2026-04-17 00:37:20.213867 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2026-04-17 00:37:20.213884 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.52s 2026-04-17 00:37:20.538875 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-17 00:37:31.683022 | orchestrator | 2026-04-17 00:37:31 | INFO  | Prepare task for execution of reboot. 2026-04-17 00:37:31.756942 | orchestrator | 2026-04-17 00:37:31 | INFO  | Task 2d73563f-7093-443b-991c-66707fd4b51d (reboot) was prepared for execution. 2026-04-17 00:37:31.757045 | orchestrator | 2026-04-17 00:37:31 | INFO  | It takes a moment until task 2d73563f-7093-443b-991c-66707fd4b51d (reboot) has been started and output is visible here. 
2026-04-17 00:37:42.853763 | orchestrator | 2026-04-17 00:37:42.853942 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 00:37:42.853967 | orchestrator | 2026-04-17 00:37:42.853977 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 00:37:42.853988 | orchestrator | Friday 17 April 2026 00:37:34 +0000 (0:00:00.237) 0:00:00.237 ********** 2026-04-17 00:37:42.854002 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:37:42.854084 | orchestrator | 2026-04-17 00:37:42.854103 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 00:37:42.854119 | orchestrator | Friday 17 April 2026 00:37:35 +0000 (0:00:00.142) 0:00:00.380 ********** 2026-04-17 00:37:42.854134 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:37:42.854149 | orchestrator | 2026-04-17 00:37:42.854166 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 00:37:42.854183 | orchestrator | Friday 17 April 2026 00:37:36 +0000 (0:00:01.231) 0:00:01.612 ********** 2026-04-17 00:37:42.854200 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:37:42.854216 | orchestrator | 2026-04-17 00:37:42.854231 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 00:37:42.854240 | orchestrator | 2026-04-17 00:37:42.854249 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 00:37:42.854258 | orchestrator | Friday 17 April 2026 00:37:36 +0000 (0:00:00.102) 0:00:01.714 ********** 2026-04-17 00:37:42.854267 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:37:42.854276 | orchestrator | 2026-04-17 00:37:42.854285 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 00:37:42.854294 | orchestrator | Friday 17 April 2026 
00:37:36 +0000 (0:00:00.104) 0:00:01.819 ********** 2026-04-17 00:37:42.854304 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:37:42.854314 | orchestrator | 2026-04-17 00:37:42.854324 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 00:37:42.854348 | orchestrator | Friday 17 April 2026 00:37:37 +0000 (0:00:01.042) 0:00:02.862 ********** 2026-04-17 00:37:42.854378 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:37:42.854388 | orchestrator | 2026-04-17 00:37:42.854398 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 00:37:42.854408 | orchestrator | 2026-04-17 00:37:42.854419 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 00:37:42.854430 | orchestrator | Friday 17 April 2026 00:37:37 +0000 (0:00:00.103) 0:00:02.966 ********** 2026-04-17 00:37:42.854439 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:37:42.854449 | orchestrator | 2026-04-17 00:37:42.854459 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 00:37:42.854469 | orchestrator | Friday 17 April 2026 00:37:37 +0000 (0:00:00.110) 0:00:03.076 ********** 2026-04-17 00:37:42.854479 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:37:42.854488 | orchestrator | 2026-04-17 00:37:42.854498 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 00:37:42.854508 | orchestrator | Friday 17 April 2026 00:37:38 +0000 (0:00:01.006) 0:00:04.083 ********** 2026-04-17 00:37:42.854518 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:37:42.854527 | orchestrator | 2026-04-17 00:37:42.854537 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 00:37:42.854547 | orchestrator | 2026-04-17 00:37:42.854557 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-04-17 00:37:42.854567 | orchestrator | Friday 17 April 2026 00:37:38 +0000 (0:00:00.112) 0:00:04.195 ********** 2026-04-17 00:37:42.854577 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:37:42.854586 | orchestrator | 2026-04-17 00:37:42.854596 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 00:37:42.854606 | orchestrator | Friday 17 April 2026 00:37:38 +0000 (0:00:00.094) 0:00:04.290 ********** 2026-04-17 00:37:42.854616 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:37:42.854626 | orchestrator | 2026-04-17 00:37:42.854635 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 00:37:42.854646 | orchestrator | Friday 17 April 2026 00:37:40 +0000 (0:00:01.034) 0:00:05.325 ********** 2026-04-17 00:37:42.854656 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:37:42.854665 | orchestrator | 2026-04-17 00:37:42.854675 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 00:37:42.854684 | orchestrator | 2026-04-17 00:37:42.854695 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 00:37:42.854705 | orchestrator | Friday 17 April 2026 00:37:40 +0000 (0:00:00.103) 0:00:05.428 ********** 2026-04-17 00:37:42.854719 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:37:42.854734 | orchestrator | 2026-04-17 00:37:42.854756 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 00:37:42.854772 | orchestrator | Friday 17 April 2026 00:37:40 +0000 (0:00:00.097) 0:00:05.525 ********** 2026-04-17 00:37:42.854786 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:37:42.854799 | orchestrator | 2026-04-17 00:37:42.854813 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-17 00:37:42.854827 | orchestrator | Friday 17 April 2026 00:37:41 +0000 (0:00:01.105) 0:00:06.631 ********** 2026-04-17 00:37:42.854841 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:37:42.854879 | orchestrator | 2026-04-17 00:37:42.854894 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-17 00:37:42.854909 | orchestrator | 2026-04-17 00:37:42.854925 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-17 00:37:42.854940 | orchestrator | Friday 17 April 2026 00:37:41 +0000 (0:00:00.106) 0:00:06.738 ********** 2026-04-17 00:37:42.854954 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:37:42.854969 | orchestrator | 2026-04-17 00:37:42.854990 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-17 00:37:42.855008 | orchestrator | Friday 17 April 2026 00:37:41 +0000 (0:00:00.104) 0:00:06.843 ********** 2026-04-17 00:37:42.855022 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:37:42.855050 | orchestrator | 2026-04-17 00:37:42.855062 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-17 00:37:42.855077 | orchestrator | Friday 17 April 2026 00:37:42 +0000 (0:00:01.053) 0:00:07.896 ********** 2026-04-17 00:37:42.855116 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:37:42.855132 | orchestrator | 2026-04-17 00:37:42.855147 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:37:42.855163 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:42.855179 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:42.855194 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-17 00:37:42.855208 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:42.855223 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:42.855237 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:37:42.855251 | orchestrator | 2026-04-17 00:37:42.855265 | orchestrator | 2026-04-17 00:37:42.855279 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:37:42.855292 | orchestrator | Friday 17 April 2026 00:37:42 +0000 (0:00:00.035) 0:00:07.932 ********** 2026-04-17 00:37:42.855315 | orchestrator | =============================================================================== 2026-04-17 00:37:42.855330 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.47s 2026-04-17 00:37:42.855345 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s 2026-04-17 00:37:42.855360 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2026-04-17 00:37:43.011409 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-17 00:37:54.357445 | orchestrator | 2026-04-17 00:37:54 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-17 00:37:54.430651 | orchestrator | 2026-04-17 00:37:54 | INFO  | Task 4da1c12e-0bdf-450c-9aec-5e634fe83761 (wait-for-connection) was prepared for execution. 2026-04-17 00:37:54.430736 | orchestrator | 2026-04-17 00:37:54 | INFO  | It takes a moment until task 4da1c12e-0bdf-450c-9aec-5e634fe83761 (wait-for-connection) has been started and output is visible here. 
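The `wait-for-connection` step above amounts to a reachability poll: after rebooting the nodes without waiting, the play blocks until each host answers again. A minimal shell equivalent, as a sketch only (the real play uses Ansible's connection plugin; the probe here is injectable so the loop can run without real hosts, and the SSH flags are illustrative):

```shell
# Default probe: a cheap SSH no-op against the host (illustrative flags).
ssh_probe() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true
}

# Poll until the probe succeeds for the host, or the timeout (seconds)
# expires. An alternate probe command can be passed as the third argument.
wait_for_ssh() {
    local host=$1
    local timeout=$2
    local probe=${3:-ssh_probe}
    local elapsed=0
    until "$probe" "$host" 2>/dev/null; do
        if (( elapsed >= timeout )); then
            echo "$host not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
        (( elapsed += 5 ))
    done
}
```

Called as `wait_for_ssh testbed-node-0 300`, this mirrors the play's behavior: it returns 0 as soon as the host responds and fails only after the full timeout.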
2026-04-17 00:38:09.456687 | orchestrator | 2026-04-17 00:38:09.456874 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-17 00:38:09.456904 | orchestrator | 2026-04-17 00:38:09.456918 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-17 00:38:09.456930 | orchestrator | Friday 17 April 2026 00:37:57 +0000 (0:00:00.321) 0:00:00.321 ********** 2026-04-17 00:38:09.456941 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:38:09.456953 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:38:09.456964 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:38:09.456975 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:38:09.456986 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:38:09.456997 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:38:09.457009 | orchestrator | 2026-04-17 00:38:09.457027 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:38:09.457047 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:09.457082 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:09.457141 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:09.457164 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:09.457184 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:09.457203 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:09.457217 | orchestrator | 2026-04-17 00:38:09.457230 | orchestrator | 2026-04-17 00:38:09.457243 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 00:38:09.457256 | orchestrator | Friday 17 April 2026 00:38:09 +0000 (0:00:11.524) 0:00:11.846 ********** 2026-04-17 00:38:09.457269 | orchestrator | =============================================================================== 2026-04-17 00:38:09.457282 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-04-17 00:38:09.613910 | orchestrator | + osism apply hddtemp 2026-04-17 00:38:20.973473 | orchestrator | 2026-04-17 00:38:20 | INFO  | Prepare task for execution of hddtemp. 2026-04-17 00:38:21.039082 | orchestrator | 2026-04-17 00:38:21 | INFO  | Task cb056222-0da8-434e-85cb-599386b38269 (hddtemp) was prepared for execution. 2026-04-17 00:38:21.039172 | orchestrator | 2026-04-17 00:38:21 | INFO  | It takes a moment until task cb056222-0da8-434e-85cb-599386b38269 (hddtemp) has been started and output is visible here. 2026-04-17 00:38:48.591704 | orchestrator | 2026-04-17 00:38:48.591839 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-17 00:38:48.591856 | orchestrator | 2026-04-17 00:38:48.591869 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-17 00:38:48.591881 | orchestrator | Friday 17 April 2026 00:38:24 +0000 (0:00:00.345) 0:00:00.345 ********** 2026-04-17 00:38:48.591892 | orchestrator | ok: [testbed-manager] 2026-04-17 00:38:48.591904 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:38:48.591915 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:38:48.591926 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:38:48.591937 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:38:48.591948 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:38:48.591959 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:38:48.591970 | orchestrator | 2026-04-17 00:38:48.591981 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-17 00:38:48.591992 | orchestrator | Friday 17 April 2026 00:38:24 +0000 (0:00:00.594) 0:00:00.940 ********** 2026-04-17 00:38:48.592005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:38:48.592019 | orchestrator | 2026-04-17 00:38:48.592031 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-17 00:38:48.592042 | orchestrator | Friday 17 April 2026 00:38:25 +0000 (0:00:01.135) 0:00:02.075 ********** 2026-04-17 00:38:48.592053 | orchestrator | ok: [testbed-manager] 2026-04-17 00:38:48.592064 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:38:48.592075 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:38:48.592086 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:38:48.592112 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:38:48.592124 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:38:48.592135 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:38:48.592146 | orchestrator | 2026-04-17 00:38:48.592157 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-17 00:38:48.592168 | orchestrator | Friday 17 April 2026 00:38:28 +0000 (0:00:02.532) 0:00:04.608 ********** 2026-04-17 00:38:48.592205 | orchestrator | changed: [testbed-manager] 2026-04-17 00:38:48.592218 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:38:48.592230 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:38:48.592243 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:38:48.592255 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:38:48.592268 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:38:48.592281 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:38:48.592293 | 
orchestrator | 2026-04-17 00:38:48.592305 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-17 00:38:48.592317 | orchestrator | Friday 17 April 2026 00:38:29 +0000 (0:00:00.937) 0:00:05.545 ********** 2026-04-17 00:38:48.592330 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:38:48.592342 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:38:48.592355 | orchestrator | ok: [testbed-manager] 2026-04-17 00:38:48.592366 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:38:48.592377 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:38:48.592388 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:38:48.592399 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:38:48.592410 | orchestrator | 2026-04-17 00:38:48.592421 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-17 00:38:48.592432 | orchestrator | Friday 17 April 2026 00:38:31 +0000 (0:00:02.267) 0:00:07.812 ********** 2026-04-17 00:38:48.592443 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:38:48.592454 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:38:48.592465 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:38:48.592476 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:38:48.592487 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:38:48.592498 | orchestrator | changed: [testbed-manager] 2026-04-17 00:38:48.592509 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:38:48.592520 | orchestrator | 2026-04-17 00:38:48.592531 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-17 00:38:48.592542 | orchestrator | Friday 17 April 2026 00:38:32 +0000 (0:00:00.582) 0:00:08.395 ********** 2026-04-17 00:38:48.592553 | orchestrator | changed: [testbed-manager] 2026-04-17 00:38:48.592564 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:38:48.592574 | orchestrator | changed: [testbed-node-1] 
2026-04-17 00:38:48.592585 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:38:48.592596 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:38:48.592607 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:38:48.592618 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:38:48.592629 | orchestrator | 2026-04-17 00:38:48.592641 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-17 00:38:48.592652 | orchestrator | Friday 17 April 2026 00:38:45 +0000 (0:00:13.204) 0:00:21.599 ********** 2026-04-17 00:38:48.592663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:38:48.592675 | orchestrator | 2026-04-17 00:38:48.592686 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-17 00:38:48.592697 | orchestrator | Friday 17 April 2026 00:38:46 +0000 (0:00:01.095) 0:00:22.695 ********** 2026-04-17 00:38:48.592708 | orchestrator | changed: [testbed-manager] 2026-04-17 00:38:48.592719 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:38:48.592730 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:38:48.592741 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:38:48.592752 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:38:48.592819 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:38:48.592831 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:38:48.592842 | orchestrator | 2026-04-17 00:38:48.592853 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:38:48.592865 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:38:48.592905 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:38:48.592917 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:38:48.592928 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:38:48.592945 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:38:48.592964 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:38:48.592982 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:38:48.592999 | orchestrator | 2026-04-17 00:38:48.593016 | orchestrator | 2026-04-17 00:38:48.593033 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:38:48.593049 | orchestrator | Friday 17 April 2026 00:38:48 +0000 (0:00:01.799) 0:00:24.495 ********** 2026-04-17 00:38:48.593066 | orchestrator | =============================================================================== 2026-04-17 00:38:48.593084 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.20s 2026-04-17 00:38:48.593102 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.53s 2026-04-17 00:38:48.593120 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.27s 2026-04-17 00:38:48.593136 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.80s 2026-04-17 00:38:48.593154 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s 2026-04-17 00:38:48.593171 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.10s 2026-04-17 00:38:48.593204 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.94s 2026-04-17 00:38:48.593224 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s 2026-04-17 00:38:48.593243 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.58s 2026-04-17 00:38:48.713890 | orchestrator | ++ semver latest 7.1.1 2026-04-17 00:38:48.766475 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 00:38:48.766568 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 00:38:48.766584 | orchestrator | + sudo systemctl restart manager.service 2026-04-17 00:39:02.074317 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 00:39:02.074409 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-17 00:39:02.074421 | orchestrator | + local max_attempts=60 2026-04-17 00:39:02.074430 | orchestrator | + local name=ceph-ansible 2026-04-17 00:39:02.074437 | orchestrator | + local attempt_num=1 2026-04-17 00:39:02.074444 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:02.108271 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:02.108381 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:02.108401 | orchestrator | + sleep 5 2026-04-17 00:39:07.111536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:07.138577 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:07.138676 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:07.138692 | orchestrator | + sleep 5 2026-04-17 00:39:12.141572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:12.178510 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:12.178579 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:12.178586 | orchestrator | + sleep 5 2026-04-17 00:39:17.183041 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:17.226857 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:17.226972 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:17.226986 | orchestrator | + sleep 5 2026-04-17 00:39:22.232037 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:22.274968 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:22.275064 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:22.275081 | orchestrator | + sleep 5 2026-04-17 00:39:27.279049 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:27.310416 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:27.310540 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:27.310567 | orchestrator | + sleep 5 2026-04-17 00:39:32.315015 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:32.349902 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:32.350009 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:32.350092 | orchestrator | + sleep 5 2026-04-17 00:39:37.352906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:37.382323 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:37.382417 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:37.382433 | orchestrator | + sleep 5 2026-04-17 00:39:42.385240 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:42.422865 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:42.422952 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:42.422965 | orchestrator | + sleep 5 2026-04-17 00:39:47.427063 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:47.457894 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:47.457995 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:47.458011 | orchestrator | + sleep 5 2026-04-17 00:39:52.461308 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:52.496650 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:52.496876 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:52.496903 | orchestrator | + sleep 5 2026-04-17 00:39:57.499939 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:39:57.540933 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 00:39:57.541044 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:39:57.541068 | orchestrator | + sleep 5 2026-04-17 00:40:02.545831 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:40:02.585211 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-17 00:40:02.585299 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-17 00:40:02.585312 | orchestrator | + sleep 5 2026-04-17 00:40:07.589926 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-17 00:40:07.622698 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:40:07.622786 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-17 00:40:07.622804 | orchestrator | + local max_attempts=60 2026-04-17 00:40:07.622817 | orchestrator | + local name=kolla-ansible 2026-04-17 00:40:07.622829 | orchestrator | + local attempt_num=1 2026-04-17 00:40:07.623112 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-17 00:40:07.655690 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:40:07.655782 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-17 00:40:07.655803 | orchestrator | + local max_attempts=60 2026-04-17 00:40:07.655815 | orchestrator | + local name=osism-ansible 2026-04-17 00:40:07.655826 | orchestrator | + local attempt_num=1 2026-04-17 00:40:07.656482 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-17 00:40:07.694272 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-17 00:40:07.694346 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-17 00:40:07.694360 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-17 00:40:07.835770 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-17 00:40:07.969364 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-17 00:40:08.113461 | orchestrator | ARA in osism-ansible already disabled. 2026-04-17 00:40:08.255518 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-17 00:40:08.255871 | orchestrator | + osism apply gather-facts 2026-04-17 00:40:19.502391 | orchestrator | 2026-04-17 00:40:19 | INFO  | Prepare task for execution of gather-facts. 2026-04-17 00:40:19.565243 | orchestrator | 2026-04-17 00:40:19 | INFO  | Task cb8a7b82-4b19-4be3-a517-f9062dd5b355 (gather-facts) was prepared for execution. 2026-04-17 00:40:19.565365 | orchestrator | 2026-04-17 00:40:19 | INFO  | It takes a moment until task cb8a7b82-4b19-4be3-a517-f9062dd5b355 (gather-facts) has been started and output is visible here. 
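The `wait_for_container_healthy` helper whose expansion is traced above polls `docker inspect` for the container's health status every five seconds, up to a maximum number of attempts. A sketch reconstructed from that trace (the real script in the testbed repository may differ; the probe is injectable here so the loop can be exercised without Docker):

```shell
# Default probe: print the container's health status, as in the trace.
docker_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll until the probe reports "healthy", giving up after max_attempts
# tries with a 5 second pause between them (matching the traced loop).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local probe=${3:-docker_health}
    local attempt_num=1
    until [[ "$("$probe" "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `wait_for_container_healthy 60 ceph-ansible`, as in the log, the worst case is about five minutes of polling; the trace shows the status moving through `unhealthy` and `starting` before `healthy` ends the loop.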
2026-04-17 00:40:30.952091 | orchestrator | 2026-04-17 00:40:30.952204 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 00:40:30.952221 | orchestrator | 2026-04-17 00:40:30.952233 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-17 00:40:30.952245 | orchestrator | Friday 17 April 2026 00:40:22 +0000 (0:00:00.218) 0:00:00.218 ********** 2026-04-17 00:40:30.952256 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:40:30.952268 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:40:30.952279 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:40:30.952290 | orchestrator | ok: [testbed-manager] 2026-04-17 00:40:30.952301 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:40:30.952312 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:40:30.952323 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:40:30.952334 | orchestrator | 2026-04-17 00:40:30.952345 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 00:40:30.952357 | orchestrator | 2026-04-17 00:40:30.952368 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 00:40:30.952379 | orchestrator | Friday 17 April 2026 00:40:30 +0000 (0:00:08.157) 0:00:08.376 ********** 2026-04-17 00:40:30.952390 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:40:30.952403 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:40:30.952413 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:40:30.952424 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:40:30.952435 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:40:30.952446 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:40:30.952456 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:40:30.952467 | orchestrator | 2026-04-17 00:40:30.952478 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-17 00:40:30.952490 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952502 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952517 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952536 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952553 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952570 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952587 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 00:40:30.952603 | orchestrator | 2026-04-17 00:40:30.952651 | orchestrator | 2026-04-17 00:40:30.952670 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:40:30.952691 | orchestrator | Friday 17 April 2026 00:40:30 +0000 (0:00:00.609) 0:00:08.985 ********** 2026-04-17 00:40:30.952710 | orchestrator | =============================================================================== 2026-04-17 00:40:30.952728 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.16s 2026-04-17 00:40:30.952740 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-04-17 00:40:31.072260 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-17 00:40:31.082160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-17 
00:40:31.091841 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-17 00:40:31.101213 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-17 00:40:31.118267 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-17 00:40:31.129861 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-17 00:40:31.144242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-17 00:40:31.157376 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-17 00:40:31.168264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-17 00:40:31.183744 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-17 00:40:31.196089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-17 00:40:31.209418 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-17 00:40:31.224708 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-17 00:40:31.240155 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-17 00:40:31.257045 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-17 00:40:31.269451 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-17 00:40:31.285887 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-17 00:40:31.298639 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-17 00:40:31.309855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-17 00:40:31.318344 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-17 00:40:31.334078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-17 00:40:31.346925 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-17 00:40:31.363866 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-17 00:40:31.377320 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-17 00:40:31.733281 | orchestrator | ok: Runtime: 0:23:29.978896 2026-04-17 00:40:31.845612 | 2026-04-17 00:40:31.845761 | TASK [Deploy services] 2026-04-17 00:40:32.379001 | orchestrator | skipping: Conditional result was False 2026-04-17 00:40:32.396694 | 2026-04-17 00:40:32.396871 | TASK [Deploy in a nutshell] 2026-04-17 00:40:33.111908 | orchestrator | + set -e 2026-04-17 00:40:33.113269 | orchestrator | 2026-04-17 00:40:33.113290 | orchestrator | # PULL IMAGES 2026-04-17 00:40:33.113299 | orchestrator | 2026-04-17 00:40:33.113310 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 00:40:33.113323 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 00:40:33.113332 | orchestrator | ++ INTERACTIVE=false 2026-04-17 00:40:33.113361 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 00:40:33.113375 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 00:40:33.113383 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 00:40:33.113391 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 00:40:33.113403 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 00:40:33.113410 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 00:40:33.113420 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 00:40:33.113427 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 00:40:33.113437 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 00:40:33.113444 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-17 00:40:33.113454 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-17 00:40:33.113460 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 00:40:33.113468 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 00:40:33.113475 | orchestrator | ++ export ARA=false 2026-04-17 00:40:33.113481 | orchestrator | ++ ARA=false 2026-04-17 00:40:33.113488 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 00:40:33.113494 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 00:40:33.113501 | orchestrator | ++ export TEMPEST=true 2026-04-17 00:40:33.113506 | orchestrator | ++ TEMPEST=true 2026-04-17 00:40:33.113513 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 00:40:33.113520 | orchestrator | ++ IS_ZUUL=true 2026-04-17 00:40:33.113526 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:40:33.113533 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 00:40:33.113539 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 00:40:33.113563 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 00:40:33.113569 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 00:40:33.113575 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 00:40:33.113582 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 00:40:33.113588 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 00:40:33.113594 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 00:40:33.113601 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 00:40:33.113617 | orchestrator | + echo 2026-04-17 00:40:33.113625 | orchestrator | + echo '# PULL IMAGES' 2026-04-17 00:40:33.113631 | orchestrator | + echo 2026-04-17 00:40:33.113646 | orchestrator | ++ semver latest 7.0.0 2026-04-17 00:40:33.167312 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 00:40:33.167375 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 00:40:33.167380 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-17 00:40:34.353725 | orchestrator | 2026-04-17 00:40:34 | INFO  | Trying to run play pull-images in environment custom 2026-04-17 00:40:44.380986 | orchestrator | 2026-04-17 00:40:44 | INFO  | Prepare task for execution of pull-images. 2026-04-17 00:40:44.458431 | orchestrator | 2026-04-17 00:40:44 | INFO  | Task 2a9501cc-544d-48f9-bd94-5315635f7de3 (pull-images) was prepared for execution. 2026-04-17 00:40:44.458554 | orchestrator | 2026-04-17 00:40:44 | INFO  | Task 2a9501cc-544d-48f9-bd94-5315635f7de3 is running in background. No more output. Check ARA for logs. 2026-04-17 00:40:45.737378 | orchestrator | 2026-04-17 00:40:45 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-17 00:40:55.765536 | orchestrator | 2026-04-17 00:40:55 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-17 00:40:55.847393 | orchestrator | 2026-04-17 00:40:55 | INFO  | Task 17360c07-0b9d-454e-8d30-491ae8142914 (wipe-partitions) was prepared for execution. 2026-04-17 00:40:55.847490 | orchestrator | 2026-04-17 00:40:55 | INFO  | It takes a moment until task 17360c07-0b9d-454e-8d30-491ae8142914 (wipe-partitions) has been started and output is visible here. 
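The trace above gates the pull step: `semver latest 7.0.0` prints `-1`, the `-ge 0` test fails, and an explicit string match on `latest` lets the run proceed to `osism apply`. A dry-run sketch of that gate, assuming a comparator-style `semver` helper (the real one is provided by the sourced include scripts; `semver_cmp`/`should_pull` are names invented for this sketch):

```shell
# Dry-run sketch of the version gate (assumed semantics): a comparator
# prints -1/0/1; "latest" never parses as a version, so a separate
# string match admits it as well.
MANAGER_VERSION=latest

semver_cmp() {
    # Stand-in comparator for the sketch only: "latest" sorts lowest,
    # otherwise compare with sort -V (version sort).
    [ "$1" = latest ] && { echo -1; return; }
    [ "$1" = "$2" ] && { echo 0; return; }
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] \
        && echo -1 || echo 1
}

should_pull() {
    v=$1
    [ "$(semver_cmp "$v" 7.0.0)" -ge 0 ] || [ "$v" = latest ]
}

should_pull "$MANAGER_VERSION" \
    && echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
```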
2026-04-17 00:41:07.038836 | orchestrator | 2026-04-17 00:41:07.038998 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-17 00:41:07.039020 | orchestrator | 2026-04-17 00:41:07.039033 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-17 00:41:07.039054 | orchestrator | Friday 17 April 2026 00:40:58 +0000 (0:00:00.148) 0:00:00.148 ********** 2026-04-17 00:41:07.039094 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:41:07.039107 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:41:07.039118 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:41:07.039129 | orchestrator | 2026-04-17 00:41:07.039141 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-17 00:41:07.039152 | orchestrator | Friday 17 April 2026 00:40:59 +0000 (0:00:00.949) 0:00:01.098 ********** 2026-04-17 00:41:07.039168 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:07.039179 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:41:07.039190 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:41:07.039201 | orchestrator | 2026-04-17 00:41:07.039212 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-17 00:41:07.039223 | orchestrator | Friday 17 April 2026 00:40:59 +0000 (0:00:00.224) 0:00:01.323 ********** 2026-04-17 00:41:07.039234 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:41:07.039245 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:41:07.039256 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:41:07.039267 | orchestrator | 2026-04-17 00:41:07.039280 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-17 00:41:07.039293 | orchestrator | Friday 17 April 2026 00:41:00 +0000 (0:00:00.517) 0:00:01.840 ********** 2026-04-17 00:41:07.039306 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 00:41:07.039319 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:41:07.039332 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:41:07.039344 | orchestrator | 2026-04-17 00:41:07.039357 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-17 00:41:07.039370 | orchestrator | Friday 17 April 2026 00:41:00 +0000 (0:00:00.220) 0:00:02.061 ********** 2026-04-17 00:41:07.039383 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-17 00:41:07.039400 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-17 00:41:07.039413 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-17 00:41:07.039426 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-17 00:41:07.039439 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-17 00:41:07.039452 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-17 00:41:07.039465 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-17 00:41:07.039478 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-17 00:41:07.039491 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-17 00:41:07.039504 | orchestrator | 2026-04-17 00:41:07.039517 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-17 00:41:07.039530 | orchestrator | Friday 17 April 2026 00:41:01 +0000 (0:00:01.266) 0:00:03.328 ********** 2026-04-17 00:41:07.039544 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-17 00:41:07.039557 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-17 00:41:07.039606 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-17 00:41:07.039621 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-17 00:41:07.039634 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-17 00:41:07.039646 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-17 00:41:07.039662 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-17 00:41:07.039681 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-17 00:41:07.039698 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-17 00:41:07.039717 | orchestrator | 2026-04-17 00:41:07.039735 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-17 00:41:07.039754 | orchestrator | Friday 17 April 2026 00:41:03 +0000 (0:00:01.335) 0:00:04.663 ********** 2026-04-17 00:41:07.039773 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-17 00:41:07.039792 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-17 00:41:07.039811 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-17 00:41:07.039839 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-17 00:41:07.039862 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-17 00:41:07.039873 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-17 00:41:07.039884 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-17 00:41:07.039895 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-17 00:41:07.039906 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-17 00:41:07.039916 | orchestrator | 2026-04-17 00:41:07.039935 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-17 00:41:07.039953 | orchestrator | Friday 17 April 2026 00:41:05 +0000 (0:00:02.141) 0:00:06.804 ********** 2026-04-17 00:41:07.039969 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:41:07.039984 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:41:07.040000 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:41:07.040017 | orchestrator | 2026-04-17 00:41:07.040035 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-17 00:41:07.040055 | orchestrator | Friday 17 April 2026 00:41:06 +0000 (0:00:00.587) 0:00:07.392 ********** 2026-04-17 00:41:07.040074 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:41:07.040091 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:41:07.040106 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:41:07.040119 | orchestrator | 2026-04-17 00:41:07.040130 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:41:07.040142 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:07.040154 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:07.040186 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:07.040198 | orchestrator | 2026-04-17 00:41:07.040209 | orchestrator | 2026-04-17 00:41:07.040220 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:41:07.040231 | orchestrator | Friday 17 April 2026 00:41:06 +0000 (0:00:00.778) 0:00:08.171 ********** 2026-04-17 00:41:07.040241 | orchestrator | =============================================================================== 2026-04-17 00:41:07.040252 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.14s 2026-04-17 00:41:07.040263 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s 2026-04-17 00:41:07.040273 | orchestrator | Check device availability ----------------------------------------------- 1.27s 2026-04-17 00:41:07.040284 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.95s 2026-04-17 00:41:07.040295 | orchestrator | Request device events from the kernel 
----------------------------------- 0.78s 2026-04-17 00:41:07.040305 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2026-04-17 00:41:07.040316 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.52s 2026-04-17 00:41:07.040327 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2026-04-17 00:41:07.040338 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2026-04-17 00:41:18.570584 | orchestrator | 2026-04-17 00:41:18 | INFO  | Prepare task for execution of facts. 2026-04-17 00:41:18.639755 | orchestrator | 2026-04-17 00:41:18 | INFO  | Task 5921ac53-0df7-423a-b9ad-93f86732f9db (facts) was prepared for execution. 2026-04-17 00:41:18.639835 | orchestrator | 2026-04-17 00:41:18 | INFO  | It takes a moment until task 5921ac53-0df7-423a-b9ad-93f86732f9db (facts) has been started and output is visible here. 2026-04-17 00:41:29.857052 | orchestrator | 2026-04-17 00:41:29.857146 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-17 00:41:29.857157 | orchestrator | 2026-04-17 00:41:29.857184 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-17 00:41:29.857191 | orchestrator | Friday 17 April 2026 00:41:21 +0000 (0:00:00.336) 0:00:00.336 ********** 2026-04-17 00:41:29.857198 | orchestrator | ok: [testbed-manager] 2026-04-17 00:41:29.857206 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:41:29.857212 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:41:29.857218 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:41:29.857224 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:41:29.857230 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:41:29.857236 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:41:29.857242 | orchestrator | 2026-04-17 00:41:29.857262 | orchestrator | TASK 
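The wipe-partitions play above boils down to a short per-disk reset sequence. A dry-run shell sketch that only prints the commands, so it is safe to run anywhere (the task-to-command mapping is an assumption inferred from the task names, not the playbook source):

```shell
# Print the wipe sequence the play performs per data disk:
# wipefs drops filesystem/RAID/LVM signatures, dd zeroes the first
# 32 MiB to clear leftover metadata, and the two udevadm calls reload
# the rules and re-request device events from the kernel.
wipe_plan() {
    for dev in "$@"; do
        echo "wipefs --all $dev"
        echo "dd if=/dev/zero of=$dev bs=1M count=32"
    done
    echo "udevadm control --reload-rules"
    echo "udevadm trigger"
}

wipe_plan /dev/sdb /dev/sdc /dev/sdd
```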
[osism.commons.facts : Copy fact files] *********************************** 2026-04-17 00:41:29.857269 | orchestrator | Friday 17 April 2026 00:41:23 +0000 (0:00:01.275) 0:00:01.612 ********** 2026-04-17 00:41:29.857275 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:41:29.857282 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:41:29.857288 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:41:29.857295 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:41:29.857301 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:29.857307 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:41:29.857313 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:41:29.857319 | orchestrator | 2026-04-17 00:41:29.857325 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 00:41:29.857331 | orchestrator | 2026-04-17 00:41:29.857337 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-17 00:41:29.857344 | orchestrator | Friday 17 April 2026 00:41:24 +0000 (0:00:01.179) 0:00:02.791 ********** 2026-04-17 00:41:29.857351 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:41:29.857357 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:41:29.857363 | orchestrator | ok: [testbed-manager] 2026-04-17 00:41:29.857369 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:41:29.857375 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:41:29.857381 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:41:29.857387 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:41:29.857393 | orchestrator | 2026-04-17 00:41:29.857399 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 00:41:29.857405 | orchestrator | 2026-04-17 00:41:29.857411 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 00:41:29.857418 | orchestrator | Friday 17 April 
2026 00:41:29 +0000 (0:00:04.876) 0:00:07.667 ********** 2026-04-17 00:41:29.857424 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:41:29.857430 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:41:29.857436 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:41:29.857442 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:41:29.857448 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:29.857455 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:41:29.857461 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:41:29.857467 | orchestrator | 2026-04-17 00:41:29.857473 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:41:29.857480 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857487 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857493 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857499 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857505 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857519 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857525 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:41:29.857531 | orchestrator | 2026-04-17 00:41:29.857621 | orchestrator | 2026-04-17 00:41:29.857634 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:41:29.857645 | orchestrator | Friday 17 April 2026 00:41:29 +0000 (0:00:00.436) 0:00:08.104 ********** 2026-04-17 00:41:29.857656 
| orchestrator | =============================================================================== 2026-04-17 00:41:29.857667 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.88s 2026-04-17 00:41:29.857675 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.28s 2026-04-17 00:41:29.857682 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2026-04-17 00:41:29.857689 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2026-04-17 00:41:31.116274 | orchestrator | 2026-04-17 00:41:31 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-17 00:41:31.166825 | orchestrator | 2026-04-17 00:41:31 | INFO  | Task cce9c90a-6548-4f27-881b-1e263851735d (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-17 00:41:31.166915 | orchestrator | 2026-04-17 00:41:31 | INFO  | It takes a moment until task cce9c90a-6548-4f27-881b-1e263851735d (ceph-configure-lvm-volumes) has been started and output is visible here. 
2026-04-17 00:41:41.887193 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-17 00:41:41.887342 | orchestrator | 2.16.14 2026-04-17 00:41:41.887366 | orchestrator | 2026-04-17 00:41:41.887411 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-17 00:41:41.887430 | orchestrator | 2026-04-17 00:41:41.887449 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-17 00:41:41.887467 | orchestrator | Friday 17 April 2026 00:41:35 +0000 (0:00:00.299) 0:00:00.299 ********** 2026-04-17 00:41:41.887486 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 00:41:41.887504 | orchestrator | 2026-04-17 00:41:41.887578 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-17 00:41:41.887600 | orchestrator | Friday 17 April 2026 00:41:35 +0000 (0:00:00.246) 0:00:00.546 ********** 2026-04-17 00:41:41.887619 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:41:41.887638 | orchestrator | 2026-04-17 00:41:41.887657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.887676 | orchestrator | Friday 17 April 2026 00:41:35 +0000 (0:00:00.212) 0:00:00.758 ********** 2026-04-17 00:41:41.887693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-17 00:41:41.887711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-17 00:41:41.887728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-17 00:41:41.887747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-17 00:41:41.887766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-17 
00:41:41.887786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-17 00:41:41.887806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-17 00:41:41.887826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-17 00:41:41.887846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-17 00:41:41.887865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-17 00:41:41.887922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-17 00:41:41.887943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-17 00:41:41.887963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-17 00:41:41.887981 | orchestrator | 2026-04-17 00:41:41.888000 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888021 | orchestrator | Friday 17 April 2026 00:41:36 +0000 (0:00:00.399) 0:00:01.158 ********** 2026-04-17 00:41:41.888042 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888062 | orchestrator | 2026-04-17 00:41:41.888082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888102 | orchestrator | Friday 17 April 2026 00:41:36 +0000 (0:00:00.456) 0:00:01.615 ********** 2026-04-17 00:41:41.888122 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888142 | orchestrator | 2026-04-17 00:41:41.888162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888188 | orchestrator | Friday 17 April 2026 00:41:36 +0000 (0:00:00.175) 0:00:01.791 ********** 2026-04-17 
00:41:41.888207 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888227 | orchestrator | 2026-04-17 00:41:41.888248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888268 | orchestrator | Friday 17 April 2026 00:41:36 +0000 (0:00:00.195) 0:00:01.987 ********** 2026-04-17 00:41:41.888323 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888343 | orchestrator | 2026-04-17 00:41:41.888363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888383 | orchestrator | Friday 17 April 2026 00:41:37 +0000 (0:00:00.190) 0:00:02.177 ********** 2026-04-17 00:41:41.888402 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888420 | orchestrator | 2026-04-17 00:41:41.888437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888454 | orchestrator | Friday 17 April 2026 00:41:37 +0000 (0:00:00.199) 0:00:02.377 ********** 2026-04-17 00:41:41.888472 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888489 | orchestrator | 2026-04-17 00:41:41.888505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888548 | orchestrator | Friday 17 April 2026 00:41:37 +0000 (0:00:00.199) 0:00:02.577 ********** 2026-04-17 00:41:41.888569 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888586 | orchestrator | 2026-04-17 00:41:41.888603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888621 | orchestrator | Friday 17 April 2026 00:41:37 +0000 (0:00:00.182) 0:00:02.760 ********** 2026-04-17 00:41:41.888637 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.888654 | orchestrator | 2026-04-17 00:41:41.888673 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-17 00:41:41.888689 | orchestrator | Friday 17 April 2026 00:41:37 +0000 (0:00:00.191) 0:00:02.951 ********** 2026-04-17 00:41:41.888707 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356) 2026-04-17 00:41:41.888727 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356) 2026-04-17 00:41:41.888744 | orchestrator | 2026-04-17 00:41:41.888763 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888814 | orchestrator | Friday 17 April 2026 00:41:38 +0000 (0:00:00.378) 0:00:03.330 ********** 2026-04-17 00:41:41.888837 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7) 2026-04-17 00:41:41.888856 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7) 2026-04-17 00:41:41.888874 | orchestrator | 2026-04-17 00:41:41.888891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888931 | orchestrator | Friday 17 April 2026 00:41:38 +0000 (0:00:00.398) 0:00:03.729 ********** 2026-04-17 00:41:41.888948 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c) 2026-04-17 00:41:41.888959 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c) 2026-04-17 00:41:41.888970 | orchestrator | 2026-04-17 00:41:41.888980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.888991 | orchestrator | Friday 17 April 2026 00:41:39 +0000 (0:00:00.594) 0:00:04.323 ********** 2026-04-17 00:41:41.889002 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9) 2026-04-17 00:41:41.889013 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9) 2026-04-17 00:41:41.889023 | orchestrator | 2026-04-17 00:41:41.889034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:41:41.889045 | orchestrator | Friday 17 April 2026 00:41:39 +0000 (0:00:00.509) 0:00:04.832 ********** 2026-04-17 00:41:41.889056 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 00:41:41.889066 | orchestrator | 2026-04-17 00:41:41.889077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:41:41.889088 | orchestrator | Friday 17 April 2026 00:41:40 +0000 (0:00:00.558) 0:00:05.391 ********** 2026-04-17 00:41:41.889111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-17 00:41:41.889122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-17 00:41:41.889133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-17 00:41:41.889143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-17 00:41:41.889154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-17 00:41:41.889165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-17 00:41:41.889175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-17 00:41:41.889186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-17 00:41:41.889197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-17 00:41:41.889208 | orchestrator | included: 
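The repeated "Add known links" tasks map each kernel device name (`sdb`, `sdc`, ...) to its stable `/dev/disk/by-id` aliases, which is what lets the later LVM configuration survive device reordering across reboots. A rough shell equivalent of that lookup (the directory is parameterised purely so the sketch can run without real hardware; `list_ids` is a name invented here):

```shell
# List the symlink aliases (e.g. scsi-0QEMU_QEMU_HARDDISK_...) that
# resolve to a given block device. The second argument defaults to
# /dev/disk/by-id and is overridable only to keep the sketch testable.
list_ids() {
    dev=$1 dir=${2:-/dev/disk/by-id}
    for link in "$dir"/*; do
        [ -L "$link" ] || continue                  # aliases are symlinks
        [ "$(readlink -f "$link")" = "$(readlink -f "$dev")" ] \
            && echo "$link"
    done
}
```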
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-17 00:41:41.889218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-17 00:41:41.889229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-17 00:41:41.889240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-17 00:41:41.889250 | orchestrator | 2026-04-17 00:41:41.889261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:41:41.889272 | orchestrator | Friday 17 April 2026 00:41:40 +0000 (0:00:00.333) 0:00:05.724 ********** 2026-04-17 00:41:41.889282 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.889293 | orchestrator | 2026-04-17 00:41:41.889303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:41:41.889314 | orchestrator | Friday 17 April 2026 00:41:40 +0000 (0:00:00.173) 0:00:05.898 ********** 2026-04-17 00:41:41.889325 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.889335 | orchestrator | 2026-04-17 00:41:41.889346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:41:41.889357 | orchestrator | Friday 17 April 2026 00:41:41 +0000 (0:00:00.161) 0:00:06.059 ********** 2026-04-17 00:41:41.889367 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.889385 | orchestrator | 2026-04-17 00:41:41.889396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:41:41.889406 | orchestrator | Friday 17 April 2026 00:41:41 +0000 (0:00:00.169) 0:00:06.229 ********** 2026-04-17 00:41:41.889417 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:41:41.889427 | orchestrator | 2026-04-17 00:41:41.889438 | orchestrator | TASK [Add known 
partitions to the list of available block devices] *************
2026-04-17 00:41:41.889449 | orchestrator | Friday 17 April 2026  00:41:41 +0000 (0:00:00.173)       0:00:06.403 **********
2026-04-17 00:41:41.889459 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:41.889470 | orchestrator |
2026-04-17 00:41:41.889499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:41.889511 | orchestrator | Friday 17 April 2026  00:41:41 +0000 (0:00:00.176)       0:00:06.580 **********
2026-04-17 00:41:41.889549 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:41.889562 | orchestrator |
2026-04-17 00:41:41.889573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:41.889584 | orchestrator | Friday 17 April 2026  00:41:41 +0000 (0:00:00.181)       0:00:06.761 **********
2026-04-17 00:41:41.889595 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:41.889606 | orchestrator |
2026-04-17 00:41:41.889626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:48.356381 | orchestrator | Friday 17 April 2026  00:41:41 +0000 (0:00:00.171)       0:00:06.932 **********
2026-04-17 00:41:48.356555 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.356581 | orchestrator |
2026-04-17 00:41:48.356600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:48.356631 | orchestrator | Friday 17 April 2026  00:41:42 +0000 (0:00:00.181)       0:00:07.113 **********
2026-04-17 00:41:48.356648 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-17 00:41:48.356665 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-17 00:41:48.356682 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-17 00:41:48.356699 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-17 00:41:48.356716 | orchestrator |
2026-04-17 00:41:48.356733 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:48.356750 | orchestrator | Friday 17 April 2026  00:41:42 +0000 (0:00:00.852)       0:00:07.966 **********
2026-04-17 00:41:48.356768 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.356785 | orchestrator |
2026-04-17 00:41:48.356802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:48.356818 | orchestrator | Friday 17 April 2026  00:41:43 +0000 (0:00:00.173)       0:00:08.139 **********
2026-04-17 00:41:48.356836 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.356853 | orchestrator |
2026-04-17 00:41:48.356870 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:48.356886 | orchestrator | Friday 17 April 2026  00:41:43 +0000 (0:00:00.168)       0:00:08.308 **********
2026-04-17 00:41:48.356903 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.356919 | orchestrator |
2026-04-17 00:41:48.356936 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:48.356954 | orchestrator | Friday 17 April 2026  00:41:43 +0000 (0:00:00.153)       0:00:08.462 **********
2026-04-17 00:41:48.356970 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.356987 | orchestrator |
2026-04-17 00:41:48.357003 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-17 00:41:48.357021 | orchestrator | Friday 17 April 2026  00:41:43 +0000 (0:00:00.176)       0:00:08.638 **********
2026-04-17 00:41:48.357040 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-17 00:41:48.357058 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-17 00:41:48.357075 | orchestrator |
2026-04-17 00:41:48.357090 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-17 00:41:48.357105 | orchestrator | Friday 17 April 2026  00:41:43 +0000 (0:00:00.173)       0:00:08.812 **********
2026-04-17 00:41:48.357153 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357170 | orchestrator |
2026-04-17 00:41:48.357186 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-17 00:41:48.357202 | orchestrator | Friday 17 April 2026  00:41:43 +0000 (0:00:00.123)       0:00:08.936 **********
2026-04-17 00:41:48.357218 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357234 | orchestrator |
2026-04-17 00:41:48.357254 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-17 00:41:48.357272 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.120)       0:00:09.056 **********
2026-04-17 00:41:48.357287 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357303 | orchestrator |
2026-04-17 00:41:48.357320 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-17 00:41:48.357337 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.130)       0:00:09.187 **********
2026-04-17 00:41:48.357353 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:41:48.357370 | orchestrator |
2026-04-17 00:41:48.357386 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-17 00:41:48.357402 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.123)       0:00:09.311 **********
2026-04-17 00:41:48.357419 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}})
2026-04-17 00:41:48.357436 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}})
2026-04-17 00:41:48.357453 | orchestrator |
2026-04-17 00:41:48.357469 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-17 00:41:48.357485 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.128)       0:00:09.440 **********
2026-04-17 00:41:48.357502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}})
2026-04-17 00:41:48.357557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}})
2026-04-17 00:41:48.357574 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357590 | orchestrator |
2026-04-17 00:41:48.357607 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-17 00:41:48.357624 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.136)       0:00:09.576 **********
2026-04-17 00:41:48.357640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}})
2026-04-17 00:41:48.357656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}})
2026-04-17 00:41:48.357672 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357688 | orchestrator |
2026-04-17 00:41:48.357704 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-17 00:41:48.357721 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.269)       0:00:09.846 **********
2026-04-17 00:41:48.357737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}})
2026-04-17 00:41:48.357772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}})
2026-04-17 00:41:48.357783 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357793 | orchestrator |
2026-04-17 00:41:48.357802 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-17 00:41:48.357812 | orchestrator | Friday 17 April 2026  00:41:44 +0000 (0:00:00.157)       0:00:10.003 **********
2026-04-17 00:41:48.357821 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:41:48.357831 | orchestrator |
2026-04-17 00:41:48.357840 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-17 00:41:48.357850 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.119)       0:00:10.123 **********
2026-04-17 00:41:48.357859 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:41:48.357879 | orchestrator |
2026-04-17 00:41:48.357889 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-17 00:41:48.357898 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.116)       0:00:10.239 **********
2026-04-17 00:41:48.357907 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357918 | orchestrator |
2026-04-17 00:41:48.357936 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-17 00:41:48.357946 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.117)       0:00:10.357 **********
2026-04-17 00:41:48.357956 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.357965 | orchestrator |
2026-04-17 00:41:48.357975 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-17 00:41:48.357992 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.112)       0:00:10.470 **********
2026-04-17 00:41:48.358008 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.358089 | orchestrator |
2026-04-17 00:41:48.358108 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-17 00:41:48.358126 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.122)       0:00:10.592 **********
2026-04-17 00:41:48.358142 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 00:41:48.358160 | orchestrator |     "ceph_osd_devices": {
2026-04-17 00:41:48.358177 | orchestrator |         "sdb": {
2026-04-17 00:41:48.358195 | orchestrator |             "osd_lvm_uuid": "2bf72114-67c4-59b2-99b4-0dc6e46ccf1e"
2026-04-17 00:41:48.358213 | orchestrator |         },
2026-04-17 00:41:48.358230 | orchestrator |         "sdc": {
2026-04-17 00:41:48.358244 | orchestrator |             "osd_lvm_uuid": "ecb05008-8fcc-5a4f-bdd9-0d58d51e77db"
2026-04-17 00:41:48.358253 | orchestrator |         }
2026-04-17 00:41:48.358263 | orchestrator |     }
2026-04-17 00:41:48.358273 | orchestrator | }
2026-04-17 00:41:48.358282 | orchestrator |
2026-04-17 00:41:48.358292 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-17 00:41:48.358301 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.126)       0:00:10.718 **********
2026-04-17 00:41:48.358311 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.358320 | orchestrator |
2026-04-17 00:41:48.358329 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-17 00:41:48.358339 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.122)       0:00:10.840 **********
2026-04-17 00:41:48.358348 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.358358 | orchestrator |
2026-04-17 00:41:48.358367 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-17 00:41:48.358381 | orchestrator | Friday 17 April 2026  00:41:45 +0000 (0:00:00.117)       0:00:10.958 **********
2026-04-17 00:41:48.358396 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:41:48.358412 | orchestrator |
2026-04-17 00:41:48.358428 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-17 00:41:48.358445 | orchestrator | Friday 17 April 2026  00:41:46 +0000 (0:00:00.122)       0:00:11.080 **********
2026-04-17 00:41:48.358462 | orchestrator | changed: [testbed-node-3] => {
2026-04-17 00:41:48.358478 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-17 00:41:48.358495 | orchestrator |         "ceph_osd_devices": {
2026-04-17 00:41:48.358511 | orchestrator |             "sdb": {
2026-04-17 00:41:48.358582 | orchestrator |                 "osd_lvm_uuid": "2bf72114-67c4-59b2-99b4-0dc6e46ccf1e"
2026-04-17 00:41:48.358600 | orchestrator |             },
2026-04-17 00:41:48.358616 | orchestrator |             "sdc": {
2026-04-17 00:41:48.358632 | orchestrator |                 "osd_lvm_uuid": "ecb05008-8fcc-5a4f-bdd9-0d58d51e77db"
2026-04-17 00:41:48.358649 | orchestrator |             }
2026-04-17 00:41:48.358666 | orchestrator |         },
2026-04-17 00:41:48.358681 | orchestrator |         "lvm_volumes": [
2026-04-17 00:41:48.358697 | orchestrator |             {
2026-04-17 00:41:48.358714 | orchestrator |                 "data": "osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e",
2026-04-17 00:41:48.358731 | orchestrator |                 "data_vg": "ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e"
2026-04-17 00:41:48.358759 | orchestrator |             },
2026-04-17 00:41:48.358775 | orchestrator |             {
2026-04-17 00:41:48.358793 | orchestrator |                 "data": "osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db",
2026-04-17 00:41:48.358808 | orchestrator |                 "data_vg": "ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db"
2026-04-17 00:41:48.358824 | orchestrator |             }
2026-04-17 00:41:48.358841 | orchestrator |         ]
2026-04-17 00:41:48.358857 | orchestrator |     }
2026-04-17 00:41:48.358873 | orchestrator | }
2026-04-17 00:41:48.358889 | orchestrator |
2026-04-17 00:41:48.358905 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-17 00:41:48.358921 | orchestrator | Friday 17 April 2026  00:41:46 +0000 (0:00:00.293)       0:00:11.374 **********
2026-04-17 00:41:48.358937 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 00:41:48.358953 | orchestrator |
2026-04-17 00:41:48.358968 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-17 00:41:48.358985 | orchestrator |
2026-04-17 00:41:48.359000 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 00:41:48.359015 | orchestrator | Friday 17 April 2026  00:41:47 +0000 (0:00:01.608)       0:00:12.982 **********
2026-04-17 00:41:48.359030 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-17 00:41:48.359048 | orchestrator |
2026-04-17 00:41:48.359072 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 00:41:48.359090 | orchestrator | Friday 17 April 2026  00:41:48 +0000 (0:00:00.224)       0:00:13.207 **********
2026-04-17 00:41:48.359106 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:41:48.359123 | orchestrator |
2026-04-17 00:41:48.359152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485092 | orchestrator | Friday 17 April 2026  00:41:48 +0000 (0:00:00.195)       0:00:13.403 **********
2026-04-17 00:41:55.485205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-17 00:41:55.485221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-17 00:41:55.485233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-17 00:41:55.485244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-17 00:41:55.485255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-17 00:41:55.485266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-17 00:41:55.485277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-17 00:41:55.485293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-17 00:41:55.485304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-17 00:41:55.485316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-17 00:41:55.485327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-17 00:41:55.485338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-17 00:41:55.485348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-17 00:41:55.485359 | orchestrator |
2026-04-17 00:41:55.485370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485382 | orchestrator | Friday 17 April 2026  00:41:48 +0000 (0:00:00.358)       0:00:13.762 **********
2026-04-17 00:41:55.485393 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485404 | orchestrator |
2026-04-17 00:41:55.485415 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485426 | orchestrator | Friday 17 April 2026  00:41:48 +0000 (0:00:00.195)       0:00:13.957 **********
2026-04-17 00:41:55.485459 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485471 | orchestrator |
2026-04-17 00:41:55.485481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485492 | orchestrator | Friday 17 April 2026  00:41:49 +0000 (0:00:00.191)       0:00:14.149 **********
2026-04-17 00:41:55.485503 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485577 | orchestrator |
2026-04-17 00:41:55.485591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485603 | orchestrator | Friday 17 April 2026  00:41:49 +0000 (0:00:00.194)       0:00:14.343 **********
2026-04-17 00:41:55.485615 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485628 | orchestrator |
2026-04-17 00:41:55.485640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485652 | orchestrator | Friday 17 April 2026  00:41:49 +0000 (0:00:00.189)       0:00:14.533 **********
2026-04-17 00:41:55.485664 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485677 | orchestrator |
2026-04-17 00:41:55.485688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485699 | orchestrator | Friday 17 April 2026  00:41:50 +0000 (0:00:00.585)       0:00:15.118 **********
2026-04-17 00:41:55.485710 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485720 | orchestrator |
2026-04-17 00:41:55.485730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485741 | orchestrator | Friday 17 April 2026  00:41:50 +0000 (0:00:00.191)       0:00:15.309 **********
2026-04-17 00:41:55.485752 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485762 | orchestrator |
2026-04-17 00:41:55.485773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485783 | orchestrator | Friday 17 April 2026  00:41:50 +0000 (0:00:00.172)       0:00:15.482 **********
2026-04-17 00:41:55.485794 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.485804 | orchestrator |
2026-04-17 00:41:55.485815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485826 | orchestrator | Friday 17 April 2026  00:41:50 +0000 (0:00:00.187)       0:00:15.670 **********
2026-04-17 00:41:55.485836 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd)
2026-04-17 00:41:55.485848 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd)
2026-04-17 00:41:55.485858 | orchestrator |
2026-04-17 00:41:55.485888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485899 | orchestrator | Friday 17 April 2026  00:41:51 +0000 (0:00:00.410)       0:00:16.080 **********
2026-04-17 00:41:55.485910 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20)
2026-04-17 00:41:55.485921 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20)
2026-04-17 00:41:55.485932 | orchestrator |
2026-04-17 00:41:55.485942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.485953 | orchestrator | Friday 17 April 2026  00:41:51 +0000 (0:00:00.367)       0:00:16.448 **********
2026-04-17 00:41:55.485964 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128)
2026-04-17 00:41:55.485975 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128)
2026-04-17 00:41:55.485986 | orchestrator |
2026-04-17 00:41:55.485997 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.486085 | orchestrator | Friday 17 April 2026  00:41:51 +0000 (0:00:00.454)       0:00:16.902 **********
2026-04-17 00:41:55.486099 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe)
2026-04-17 00:41:55.486110 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe)
2026-04-17 00:41:55.486121 | orchestrator |
2026-04-17 00:41:55.486164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:41:55.486177 | orchestrator | Friday 17 April 2026  00:41:52 +0000 (0:00:00.385)       0:00:17.287 **********
2026-04-17 00:41:55.486188 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-17 00:41:55.486198 | orchestrator |
2026-04-17 00:41:55.486209 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486219 | orchestrator | Friday 17 April 2026  00:41:52 +0000 (0:00:00.301)       0:00:17.589 **********
2026-04-17 00:41:55.486230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-17 00:41:55.486240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-17 00:41:55.486251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-17 00:41:55.486262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-17 00:41:55.486272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-17 00:41:55.486282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-17 00:41:55.486293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-17 00:41:55.486304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-17 00:41:55.486314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-17 00:41:55.486325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-17 00:41:55.486336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-17 00:41:55.486346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-17 00:41:55.486356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-17 00:41:55.486367 | orchestrator |
2026-04-17 00:41:55.486378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486388 | orchestrator | Friday 17 April 2026  00:41:52 +0000 (0:00:00.343)       0:00:17.932 **********
2026-04-17 00:41:55.486399 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486409 | orchestrator |
2026-04-17 00:41:55.486420 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486431 | orchestrator | Friday 17 April 2026  00:41:53 +0000 (0:00:00.177)       0:00:18.110 **********
2026-04-17 00:41:55.486441 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486452 | orchestrator |
2026-04-17 00:41:55.486462 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486473 | orchestrator | Friday 17 April 2026  00:41:53 +0000 (0:00:00.498)       0:00:18.608 **********
2026-04-17 00:41:55.486484 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486495 | orchestrator |
2026-04-17 00:41:55.486524 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486536 | orchestrator | Friday 17 April 2026  00:41:53 +0000 (0:00:00.178)       0:00:18.786 **********
2026-04-17 00:41:55.486547 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486557 | orchestrator |
2026-04-17 00:41:55.486568 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486579 | orchestrator | Friday 17 April 2026  00:41:53 +0000 (0:00:00.228)       0:00:19.015 **********
2026-04-17 00:41:55.486589 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486600 | orchestrator |
2026-04-17 00:41:55.486611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486622 | orchestrator | Friday 17 April 2026  00:41:54 +0000 (0:00:00.182)       0:00:19.197 **********
2026-04-17 00:41:55.486632 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486665 | orchestrator |
2026-04-17 00:41:55.486695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486707 | orchestrator | Friday 17 April 2026  00:41:54 +0000 (0:00:00.172)       0:00:19.370 **********
2026-04-17 00:41:55.486718 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486729 | orchestrator |
2026-04-17 00:41:55.486740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486750 | orchestrator | Friday 17 April 2026  00:41:54 +0000 (0:00:00.176)       0:00:19.546 **********
2026-04-17 00:41:55.486761 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:41:55.486772 | orchestrator |
2026-04-17 00:41:55.486782 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486793 | orchestrator | Friday 17 April 2026  00:41:54 +0000 (0:00:00.177)       0:00:19.724 **********
2026-04-17 00:41:55.486803 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-17 00:41:55.486815 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-17 00:41:55.486826 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-17 00:41:55.486837 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-17 00:41:55.486847 | orchestrator |
2026-04-17 00:41:55.486858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:41:55.486869 | orchestrator | Friday 17 April 2026  00:41:55 +0000 (0:00:00.710)       0:00:20.435 **********
2026-04-17 00:41:55.486879 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.844773 | orchestrator |
2026-04-17 00:42:00.844883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:00.844901 | orchestrator | Friday 17 April 2026  00:41:55 +0000 (0:00:00.164)       0:00:20.599 **********
2026-04-17 00:42:00.844913 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.844924 | orchestrator |
2026-04-17 00:42:00.844936 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:00.844947 | orchestrator | Friday 17 April 2026  00:41:55 +0000 (0:00:00.164)       0:00:20.764 **********
2026-04-17 00:42:00.844958 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.844968 | orchestrator |
2026-04-17 00:42:00.844979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:00.844990 | orchestrator | Friday 17 April 2026  00:41:55 +0000 (0:00:00.174)       0:00:20.938 **********
2026-04-17 00:42:00.845001 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845012 | orchestrator |
2026-04-17 00:42:00.845023 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-17 00:42:00.845034 | orchestrator | Friday 17 April 2026  00:41:56 +0000 (0:00:00.162)       0:00:21.101 **********
2026-04-17 00:42:00.845044 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-17 00:42:00.845055 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-17 00:42:00.845066 | orchestrator |
2026-04-17 00:42:00.845078 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-17 00:42:00.845088 | orchestrator | Friday 17 April 2026  00:41:56 +0000 (0:00:00.306)       0:00:21.408 **********
2026-04-17 00:42:00.845099 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845110 | orchestrator |
2026-04-17 00:42:00.845121 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-17 00:42:00.845131 | orchestrator | Friday 17 April 2026  00:41:56 +0000 (0:00:00.106)       0:00:21.514 **********
2026-04-17 00:42:00.845142 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845153 | orchestrator |
2026-04-17 00:42:00.845163 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-17 00:42:00.845174 | orchestrator | Friday 17 April 2026  00:41:56 +0000 (0:00:00.125)       0:00:21.639 **********
2026-04-17 00:42:00.845185 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845196 | orchestrator |
2026-04-17 00:42:00.845207 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-17 00:42:00.845218 | orchestrator | Friday 17 April 2026  00:41:56 +0000 (0:00:00.123)       0:00:21.763 **********
2026-04-17 00:42:00.845252 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:42:00.845264 | orchestrator |
2026-04-17 00:42:00.845275 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-17 00:42:00.845286 | orchestrator | Friday 17 April 2026  00:41:56 +0000 (0:00:00.140)       0:00:21.904 **********
2026-04-17 00:42:00.845300 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f135813a-7de6-5823-bba0-0d89f58fd8f7'}})
2026-04-17 00:42:00.845313 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '96c1a302-a68f-51af-8cb0-5deb1c72c0bb'}})
2026-04-17 00:42:00.845326 | orchestrator |
2026-04-17 00:42:00.845339 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-17 00:42:00.845351 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.197)       0:00:22.101 **********
2026-04-17 00:42:00.845364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f135813a-7de6-5823-bba0-0d89f58fd8f7'}})
2026-04-17 00:42:00.845378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '96c1a302-a68f-51af-8cb0-5deb1c72c0bb'}})
2026-04-17 00:42:00.845390 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845401 | orchestrator |
2026-04-17 00:42:00.845411 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-17 00:42:00.845422 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.141)       0:00:22.243 **********
2026-04-17 00:42:00.845433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f135813a-7de6-5823-bba0-0d89f58fd8f7'}})
2026-04-17 00:42:00.845444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '96c1a302-a68f-51af-8cb0-5deb1c72c0bb'}})
2026-04-17 00:42:00.845456 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845466 | orchestrator |
2026-04-17 00:42:00.845477 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-17 00:42:00.845488 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.161)       0:00:22.405 **********
2026-04-17 00:42:00.845498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f135813a-7de6-5823-bba0-0d89f58fd8f7'}})
2026-04-17 00:42:00.845571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '96c1a302-a68f-51af-8cb0-5deb1c72c0bb'}})
2026-04-17 00:42:00.845583 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845593 | orchestrator |
2026-04-17 00:42:00.845623 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-17 00:42:00.845634 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.120)       0:00:22.525 **********
2026-04-17 00:42:00.845645 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:42:00.845656 | orchestrator |
2026-04-17 00:42:00.845666 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-17 00:42:00.845677 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.098)       0:00:22.624 **********
2026-04-17 00:42:00.845688 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:42:00.845698 | orchestrator |
2026-04-17 00:42:00.845709 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-17 00:42:00.845720 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.086)       0:00:22.710 **********
2026-04-17 00:42:00.845773 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845785 | orchestrator |
2026-04-17 00:42:00.845796 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-17 00:42:00.845806 | orchestrator | Friday 17 April 2026  00:41:57 +0000 (0:00:00.090)       0:00:22.801 **********
2026-04-17 00:42:00.845817 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845827 | orchestrator |
2026-04-17 00:42:00.845838 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-17 00:42:00.845848 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.255)       0:00:23.056 **********
2026-04-17 00:42:00.845859 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.845934 | orchestrator |
2026-04-17 00:42:00.845946 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-17 00:42:00.845957 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.103)       0:00:23.160 **********
2026-04-17 00:42:00.845967 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 00:42:00.845978 | orchestrator |     "ceph_osd_devices": {
2026-04-17 00:42:00.845989 | orchestrator |         "sdb": {
2026-04-17 00:42:00.846000 | orchestrator |             "osd_lvm_uuid": "f135813a-7de6-5823-bba0-0d89f58fd8f7"
2026-04-17 00:42:00.846011 | orchestrator |         },
2026-04-17 00:42:00.846081 | orchestrator |         "sdc": {
2026-04-17 00:42:00.846092 | orchestrator |             "osd_lvm_uuid": "96c1a302-a68f-51af-8cb0-5deb1c72c0bb"
2026-04-17 00:42:00.846103 | orchestrator |         }
2026-04-17 00:42:00.846114 | orchestrator |     }
2026-04-17 00:42:00.846125 | orchestrator | }
2026-04-17 00:42:00.846136 | orchestrator |
2026-04-17 00:42:00.846147 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-17 00:42:00.846158 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.125)       0:00:23.286 **********
2026-04-17 00:42:00.846169 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.846180 | orchestrator |
2026-04-17 00:42:00.846191 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-17 00:42:00.846201 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.143)       0:00:23.429 **********
2026-04-17 00:42:00.846212 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.846223 | orchestrator |
2026-04-17 00:42:00.846233 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-17 00:42:00.846244 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.145)       0:00:23.575 **********
2026-04-17 00:42:00.846255 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:42:00.846265 | orchestrator |
2026-04-17 00:42:00.846276 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-17 00:42:00.846287 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.104)       0:00:23.679 **********
2026-04-17 00:42:00.846298 | orchestrator | changed: [testbed-node-4] => {
2026-04-17 00:42:00.846309 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-17 00:42:00.846319 | orchestrator |         "ceph_osd_devices": {
2026-04-17 00:42:00.846330 | orchestrator |             "sdb": {
2026-04-17 00:42:00.846341 | orchestrator |                 "osd_lvm_uuid": "f135813a-7de6-5823-bba0-0d89f58fd8f7"
2026-04-17 00:42:00.846352 | orchestrator |             },
2026-04-17 00:42:00.846363 | orchestrator |             "sdc": {
2026-04-17 00:42:00.846374 | orchestrator |                 "osd_lvm_uuid": "96c1a302-a68f-51af-8cb0-5deb1c72c0bb"
2026-04-17 00:42:00.846384 | orchestrator |             }
2026-04-17 00:42:00.846395 | orchestrator |         },
2026-04-17 00:42:00.846406 | orchestrator |         "lvm_volumes": [
2026-04-17 00:42:00.846416 | orchestrator |             {
2026-04-17 00:42:00.846427 | orchestrator |                 "data": "osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7",
2026-04-17 00:42:00.846438 | orchestrator |                 "data_vg": "ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7"
2026-04-17 00:42:00.846449 | orchestrator |             },
2026-04-17 00:42:00.846459 | orchestrator |             {
2026-04-17 00:42:00.846470 | orchestrator |                 "data": "osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb",
2026-04-17 00:42:00.846481 | orchestrator |                 "data_vg": "ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb"
2026-04-17 00:42:00.846491 | orchestrator |             }
2026-04-17 00:42:00.846521 | orchestrator |         ]
2026-04-17 00:42:00.846533 | orchestrator |     }
2026-04-17 00:42:00.846544 | orchestrator | }
2026-04-17 00:42:00.846555 | orchestrator |
2026-04-17 00:42:00.846566 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-17 00:42:00.846577 | orchestrator | Friday 17 April 2026  00:41:58 +0000 (0:00:00.188)       0:00:23.868 **********
2026-04-17 00:42:00.846588 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-17 00:42:00.846598 | orchestrator |
2026-04-17 00:42:00.846633 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-17 00:42:00.846645 | orchestrator |
2026-04-17 00:42:00.846657 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 00:42:00.846667 | orchestrator | Friday 17 April 2026 00:41:59 +0000 (0:00:00.817) 0:00:24.685 **********
2026-04-17 00:42:00.846678 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-17 00:42:00.846689 | orchestrator |
2026-04-17 00:42:00.846700 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 00:42:00.846710 | orchestrator | Friday 17 April 2026 00:42:00 +0000 (0:00:00.506) 0:00:25.108 **********
2026-04-17 00:42:00.846721 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:42:00.846732 | orchestrator |
2026-04-17 00:42:00.846743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:00.846753 | orchestrator | Friday 17 April 2026 00:42:00 +0000 (0:00:00.366) 0:00:25.614 **********
2026-04-17 00:42:00.846764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-17 00:42:00.846775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-17 00:42:00.846786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-17 00:42:00.846797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-17 00:42:00.846807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-17 00:42:00.846827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-17 00:42:09.008225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-17 00:42:09.008309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-17 00:42:09.008319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-17 00:42:09.008326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-17 00:42:09.008347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-17 00:42:09.008354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-17 00:42:09.008361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-17 00:42:09.008368 | orchestrator |
2026-04-17 00:42:09.008375 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008382 | orchestrator | Friday 17 April 2026 00:42:00 +0000 (0:00:00.366) 0:00:25.981 **********
2026-04-17 00:42:09.008389 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008396 | orchestrator |
2026-04-17 00:42:09.008402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008409 | orchestrator | Friday 17 April 2026 00:42:01 +0000 (0:00:00.163) 0:00:26.145 **********
2026-04-17 00:42:09.008415 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008421 | orchestrator |
2026-04-17 00:42:09.008428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008434 | orchestrator | Friday 17 April 2026 00:42:01 +0000 (0:00:00.169) 0:00:26.315 **********
2026-04-17 00:42:09.008440 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008446 | orchestrator |
2026-04-17 00:42:09.008453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008459 | orchestrator | Friday 17 April 2026 00:42:01 +0000 (0:00:00.171) 0:00:26.486 **********
2026-04-17 00:42:09.008468 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008474 | orchestrator |
2026-04-17 00:42:09.008481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008487 | orchestrator | Friday 17 April 2026 00:42:01 +0000 (0:00:00.162) 0:00:26.649 **********
2026-04-17 00:42:09.008570 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008578 | orchestrator |
2026-04-17 00:42:09.008584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008590 | orchestrator | Friday 17 April 2026 00:42:01 +0000 (0:00:00.171) 0:00:26.820 **********
2026-04-17 00:42:09.008596 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008602 | orchestrator |
2026-04-17 00:42:09.008609 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008615 | orchestrator | Friday 17 April 2026 00:42:01 +0000 (0:00:00.189) 0:00:27.010 **********
2026-04-17 00:42:09.008621 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008627 | orchestrator |
2026-04-17 00:42:09.008634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008640 | orchestrator | Friday 17 April 2026 00:42:02 +0000 (0:00:00.180) 0:00:27.191 **********
2026-04-17 00:42:09.008646 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008652 | orchestrator |
2026-04-17 00:42:09.008659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008665 | orchestrator | Friday 17 April 2026 00:42:02 +0000 (0:00:00.186) 0:00:27.377 **********
2026-04-17 00:42:09.008671 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6)
2026-04-17 00:42:09.008679 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6)
2026-04-17 00:42:09.008685 | orchestrator |
2026-04-17 00:42:09.008691 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008698 | orchestrator | Friday 17 April 2026 00:42:02 +0000 (0:00:00.564) 0:00:27.941 **********
2026-04-17 00:42:09.008704 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363)
2026-04-17 00:42:09.008710 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363)
2026-04-17 00:42:09.008716 | orchestrator |
2026-04-17 00:42:09.008723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008729 | orchestrator | Friday 17 April 2026 00:42:03 +0000 (0:00:00.815) 0:00:28.757 **********
2026-04-17 00:42:09.008735 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06)
2026-04-17 00:42:09.008741 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06)
2026-04-17 00:42:09.008747 | orchestrator |
2026-04-17 00:42:09.008754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008760 | orchestrator | Friday 17 April 2026 00:42:04 +0000 (0:00:00.581) 0:00:29.338 **********
2026-04-17 00:42:09.008766 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b)
2026-04-17 00:42:09.008772 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b)
2026-04-17 00:42:09.008779 | orchestrator |
2026-04-17 00:42:09.008786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:42:09.008794 | orchestrator | Friday 17 April 2026 00:42:04 +0000 (0:00:00.450) 0:00:29.789 **********
2026-04-17 00:42:09.008802 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-17 00:42:09.008809 | orchestrator |
2026-04-17 00:42:09.008816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.008837 | orchestrator | Friday 17 April 2026 00:42:05 +0000 (0:00:00.340) 0:00:30.129 **********
2026-04-17 00:42:09.008845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-17 00:42:09.008852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-17 00:42:09.008860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-17 00:42:09.008867 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-17 00:42:09.008878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-17 00:42:09.008885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-17 00:42:09.008892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-17 00:42:09.008900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-17 00:42:09.008907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-17 00:42:09.008914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-17 00:42:09.008922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-17 00:42:09.008930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-17 00:42:09.008937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-17 00:42:09.008946 | orchestrator |
2026-04-17 00:42:09.008956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.008966 | orchestrator | Friday 17 April 2026 00:42:05 +0000 (0:00:00.361) 0:00:30.491 **********
2026-04-17 00:42:09.008976 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.008986 | orchestrator |
2026-04-17 00:42:09.008996 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009007 | orchestrator | Friday 17 April 2026 00:42:05 +0000 (0:00:00.212) 0:00:30.703 **********
2026-04-17 00:42:09.009017 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009026 | orchestrator |
2026-04-17 00:42:09.009035 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009044 | orchestrator | Friday 17 April 2026 00:42:05 +0000 (0:00:00.210) 0:00:30.914 **********
2026-04-17 00:42:09.009053 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009063 | orchestrator |
2026-04-17 00:42:09.009074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009090 | orchestrator | Friday 17 April 2026 00:42:06 +0000 (0:00:00.186) 0:00:31.100 **********
2026-04-17 00:42:09.009101 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009111 | orchestrator |
2026-04-17 00:42:09.009120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009130 | orchestrator | Friday 17 April 2026 00:42:06 +0000 (0:00:00.171) 0:00:31.272 **********
2026-04-17 00:42:09.009140 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009149 | orchestrator |
2026-04-17 00:42:09.009158 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009167 | orchestrator | Friday 17 April 2026 00:42:06 +0000 (0:00:00.166) 0:00:31.438 **********
2026-04-17 00:42:09.009176 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009186 | orchestrator |
2026-04-17 00:42:09.009196 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009206 | orchestrator | Friday 17 April 2026 00:42:06 +0000 (0:00:00.502) 0:00:31.941 **********
2026-04-17 00:42:09.009215 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009224 | orchestrator |
2026-04-17 00:42:09.009234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009243 | orchestrator | Friday 17 April 2026 00:42:07 +0000 (0:00:00.187) 0:00:32.129 **********
2026-04-17 00:42:09.009252 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009261 | orchestrator |
2026-04-17 00:42:09.009271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009280 | orchestrator | Friday 17 April 2026 00:42:07 +0000 (0:00:00.232) 0:00:32.361 **********
2026-04-17 00:42:09.009289 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-17 00:42:09.009338 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-17 00:42:09.009351 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-17 00:42:09.009361 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-17 00:42:09.009371 | orchestrator |
2026-04-17 00:42:09.009383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009394 | orchestrator | Friday 17 April 2026 00:42:08 +0000 (0:00:00.833) 0:00:33.195 **********
2026-04-17 00:42:09.009404 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009414 | orchestrator |
2026-04-17 00:42:09.009426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009437 | orchestrator | Friday 17 April 2026 00:42:08 +0000 (0:00:00.212) 0:00:33.407 **********
2026-04-17 00:42:09.009447 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009458 | orchestrator |
2026-04-17 00:42:09.009469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009480 | orchestrator | Friday 17 April 2026 00:42:08 +0000 (0:00:00.195) 0:00:33.603 **********
2026-04-17 00:42:09.009508 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009519 | orchestrator |
2026-04-17 00:42:09.009529 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:42:09.009539 | orchestrator | Friday 17 April 2026 00:42:08 +0000 (0:00:00.242) 0:00:33.845 **********
2026-04-17 00:42:09.009550 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:09.009559 | orchestrator |
2026-04-17 00:42:09.009583 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-17 00:42:12.572107 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.210) 0:00:34.055 **********
2026-04-17 00:42:12.572199 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-17 00:42:12.572211 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-17 00:42:12.572221 | orchestrator |
2026-04-17 00:42:12.572231 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-17 00:42:12.572240 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.173) 0:00:34.229 **********
2026-04-17 00:42:12.572249 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572258 | orchestrator |
2026-04-17 00:42:12.572267 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-17 00:42:12.572276 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.129) 0:00:34.359 **********
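[editor's note] The "Set UUIDs for OSD VGs/LVs" task above assigns a stable `osd_lvm_uuid` to every device whose value is still `None`. All UUIDs printed in this log carry the version-5 nibble (e.g. `5c07`, `54dd`), which is consistent with name-based UUIDs that stay identical across repeated runs. The sketch below illustrates that idea only; the namespace and name format are assumptions for illustration, not OSISM's actual implementation:

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> uuid.UUID:
    """Derive a deterministic, name-based (version 5) UUID for an OSD device.

    Hypothetical name scheme: "<hostname>-<device>" hashed into the
    DNS namespace. The real playbook may use a different namespace/name.
    """
    return uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}")

u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u.version)  # 5
# Re-running yields the same UUID, so replays of the play are idempotent.
print(u == osd_lvm_uuid("testbed-node-5", "sdb"))  # True
```

Determinism is what matters here: the same node/device pair must always map to the same VG/LV name, otherwise a re-run of the configuration play would propose new volumes for already-provisioned disks.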
2026-04-17 00:42:12.572285 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572293 | orchestrator |
2026-04-17 00:42:12.572302 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-17 00:42:12.572310 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.125) 0:00:34.484 **********
2026-04-17 00:42:12.572319 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572328 | orchestrator |
2026-04-17 00:42:12.572337 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-17 00:42:12.572346 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.126) 0:00:34.611 **********
2026-04-17 00:42:12.572355 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:42:12.572364 | orchestrator |
2026-04-17 00:42:12.572373 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-17 00:42:12.572381 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.264) 0:00:34.875 **********
2026-04-17 00:42:12.572391 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd097a065-5c07-563d-9f82-653f6f04c198'}})
2026-04-17 00:42:12.572399 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '037810f1-d9a1-54dd-a4a8-d143a432af64'}})
2026-04-17 00:42:12.572408 | orchestrator |
2026-04-17 00:42:12.572417 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-17 00:42:12.572426 | orchestrator | Friday 17 April 2026 00:42:09 +0000 (0:00:00.149) 0:00:35.024 **********
2026-04-17 00:42:12.572435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd097a065-5c07-563d-9f82-653f6f04c198'}})
2026-04-17 00:42:12.572465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '037810f1-d9a1-54dd-a4a8-d143a432af64'}})
2026-04-17 00:42:12.572475 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572483 | orchestrator |
2026-04-17 00:42:12.572579 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-17 00:42:12.572599 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.153) 0:00:35.178 **********
2026-04-17 00:42:12.572618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd097a065-5c07-563d-9f82-653f6f04c198'}})
2026-04-17 00:42:12.572632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '037810f1-d9a1-54dd-a4a8-d143a432af64'}})
2026-04-17 00:42:12.572646 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572659 | orchestrator |
2026-04-17 00:42:12.572673 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-17 00:42:12.572687 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.130) 0:00:35.308 **********
2026-04-17 00:42:12.572701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd097a065-5c07-563d-9f82-653f6f04c198'}})
2026-04-17 00:42:12.572715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '037810f1-d9a1-54dd-a4a8-d143a432af64'}})
2026-04-17 00:42:12.572729 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572743 | orchestrator |
2026-04-17 00:42:12.572758 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-17 00:42:12.572773 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.153) 0:00:35.462 **********
2026-04-17 00:42:12.572787 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:42:12.572802 | orchestrator |
2026-04-17 00:42:12.572818 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-17 00:42:12.572833 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.151) 0:00:35.614 **********
2026-04-17 00:42:12.572849 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:42:12.572865 | orchestrator |
2026-04-17 00:42:12.572881 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-17 00:42:12.572897 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.112) 0:00:35.727 **********
2026-04-17 00:42:12.572912 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572925 | orchestrator |
2026-04-17 00:42:12.572939 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-17 00:42:12.572953 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.138) 0:00:35.865 **********
2026-04-17 00:42:12.572968 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.572982 | orchestrator |
2026-04-17 00:42:12.572995 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-17 00:42:12.573009 | orchestrator | Friday 17 April 2026 00:42:10 +0000 (0:00:00.113) 0:00:35.979 **********
2026-04-17 00:42:12.573024 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.573038 | orchestrator |
2026-04-17 00:42:12.573053 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-17 00:42:12.573068 | orchestrator | Friday 17 April 2026 00:42:11 +0000 (0:00:00.117) 0:00:36.096 **********
2026-04-17 00:42:12.573084 | orchestrator | ok: [testbed-node-5] => {
2026-04-17 00:42:12.573099 | orchestrator |  "ceph_osd_devices": {
2026-04-17 00:42:12.573115 | orchestrator |  "sdb": {
2026-04-17 00:42:12.573155 | orchestrator |  "osd_lvm_uuid": "d097a065-5c07-563d-9f82-653f6f04c198"
2026-04-17 00:42:12.573172 | orchestrator |  },
2026-04-17 00:42:12.573186 | orchestrator |  "sdc": {
2026-04-17 00:42:12.573221 | orchestrator |  "osd_lvm_uuid": "037810f1-d9a1-54dd-a4a8-d143a432af64"
2026-04-17 00:42:12.573237 | orchestrator |  }
2026-04-17 00:42:12.573251 | orchestrator |  }
2026-04-17 00:42:12.573266 | orchestrator | }
2026-04-17 00:42:12.573280 | orchestrator |
2026-04-17 00:42:12.573310 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-17 00:42:12.573325 | orchestrator | Friday 17 April 2026 00:42:11 +0000 (0:00:00.130) 0:00:36.226 **********
2026-04-17 00:42:12.573340 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.573355 | orchestrator |
2026-04-17 00:42:12.573369 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-17 00:42:12.573385 | orchestrator | Friday 17 April 2026 00:42:11 +0000 (0:00:00.096) 0:00:36.323 **********
2026-04-17 00:42:12.573401 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.573417 | orchestrator |
2026-04-17 00:42:12.573431 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-17 00:42:12.573445 | orchestrator | Friday 17 April 2026 00:42:11 +0000 (0:00:00.271) 0:00:36.594 **********
2026-04-17 00:42:12.573461 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:42:12.573474 | orchestrator |
2026-04-17 00:42:12.573517 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-17 00:42:12.573532 | orchestrator | Friday 17 April 2026 00:42:11 +0000 (0:00:00.108) 0:00:36.703 **********
2026-04-17 00:42:12.573547 | orchestrator | changed: [testbed-node-5] => {
2026-04-17 00:42:12.573562 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-17 00:42:12.573576 | orchestrator |  "ceph_osd_devices": {
2026-04-17 00:42:12.573591 | orchestrator |  "sdb": {
2026-04-17 00:42:12.573606 | orchestrator |  "osd_lvm_uuid": "d097a065-5c07-563d-9f82-653f6f04c198"
2026-04-17 00:42:12.573621 | orchestrator |  },
2026-04-17 00:42:12.573636 | orchestrator |  "sdc": {
2026-04-17 00:42:12.573671 | orchestrator |  "osd_lvm_uuid": "037810f1-d9a1-54dd-a4a8-d143a432af64"
2026-04-17 00:42:12.573687 | orchestrator |  }
2026-04-17 00:42:12.573703 | orchestrator |  },
2026-04-17 00:42:12.573719 | orchestrator |  "lvm_volumes": [
2026-04-17 00:42:12.573733 | orchestrator |  {
2026-04-17 00:42:12.573749 | orchestrator |  "data": "osd-block-d097a065-5c07-563d-9f82-653f6f04c198",
2026-04-17 00:42:12.573764 | orchestrator |  "data_vg": "ceph-d097a065-5c07-563d-9f82-653f6f04c198"
2026-04-17 00:42:12.573778 | orchestrator |  },
2026-04-17 00:42:12.573798 | orchestrator |  {
2026-04-17 00:42:12.573814 | orchestrator |  "data": "osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64",
2026-04-17 00:42:12.573829 | orchestrator |  "data_vg": "ceph-037810f1-d9a1-54dd-a4a8-d143a432af64"
2026-04-17 00:42:12.573844 | orchestrator |  }
2026-04-17 00:42:12.573858 | orchestrator |  ]
2026-04-17 00:42:12.573873 | orchestrator |  }
2026-04-17 00:42:12.573888 | orchestrator | }
2026-04-17 00:42:12.573904 | orchestrator |
2026-04-17 00:42:12.573920 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-17 00:42:12.573935 | orchestrator | Friday 17 April 2026 00:42:11 +0000 (0:00:00.193) 0:00:36.897 **********
2026-04-17 00:42:12.573951 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-17 00:42:12.573967 | orchestrator |
2026-04-17 00:42:12.573983 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:42:12.573997 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 00:42:12.574012 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 00:42:12.574110 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-17 00:42:12.574124 | orchestrator |
2026-04-17 00:42:12.574139 | orchestrator |
2026-04-17 00:42:12.574154 | orchestrator |
2026-04-17 00:42:12.574169 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:42:12.574185 | orchestrator | Friday 17 April 2026 00:42:12 +0000 (0:00:00.712) 0:00:37.609 **********
2026-04-17 00:42:12.574214 | orchestrator | ===============================================================================
2026-04-17 00:42:12.574231 | orchestrator | Write configuration file ------------------------------------------------ 3.14s
2026-04-17 00:42:12.574246 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s
2026-04-17 00:42:12.574261 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-04-17 00:42:12.574275 | orchestrator | Get initial list of available block devices ----------------------------- 0.91s
2026-04-17 00:42:12.574289 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s
2026-04-17 00:42:12.574304 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2026-04-17 00:42:12.574319 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2026-04-17 00:42:12.574332 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2026-04-17 00:42:12.574347 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-04-17 00:42:12.574362 | orchestrator | Print configuration data ------------------------------------------------ 0.68s
2026-04-17 00:42:12.574376 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s
2026-04-17 00:42:12.574391 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-04-17 00:42:12.574404 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-04-17 00:42:12.574433 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-04-17 00:42:12.782247 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2026-04-17 00:42:12.782357 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.56s
2026-04-17 00:42:12.782375 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2026-04-17 00:42:12.782389 | orchestrator | Print DB devices -------------------------------------------------------- 0.53s
2026-04-17 00:42:12.782404 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.53s
2026-04-17 00:42:12.782418 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-04-17 00:42:34.388278 | orchestrator | 2026-04-17 00:42:34 | INFO  | Task d67022c6-0175-4eeb-89b8-1367a31e2cba (sync inventory) is running in background. Output coming soon.
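[editor's note] The "Compile lvm_volumes" step in the play above turns each entry of `ceph_osd_devices` into one `lvm_volumes` record: the `osd_lvm_uuid` reappears as an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`, exactly as the "Print configuration data" output shows. A minimal sketch of that mapping (not OSISM's actual code, just the pattern visible in this log):

```python
# Input as printed by "Print ceph_osd_devices" for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d097a065-5c07-563d-9f82-653f6f04c198"},
    "sdc": {"osd_lvm_uuid": "037810f1-d9a1-54dd-a4a8-d143a432af64"},
}

# Derive the block-only lvm_volumes structure: one LV/VG pair per device,
# both named after the device's osd_lvm_uuid.
lvm_volumes = [
    {
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]

print(lvm_volumes[0]["data_vg"])  # ceph-d097a065-5c07-563d-9f82-653f6f04c198
```

With DB or WAL devices configured, the per-volume records would also carry `db`/`db_vg` or `wal`/`wal_vg` keys (those code paths are all skipped in this run), and the compiled structure is what the "Write configuration file" handler persists per host.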
2026-04-17 00:43:00.534512 | orchestrator | 2026-04-17 00:42:35 | INFO  | Starting group_vars file reorganization
2026-04-17 00:43:00.534635 | orchestrator | 2026-04-17 00:42:35 | INFO  | Moved 0 file(s) to their respective directories
2026-04-17 00:43:00.534652 | orchestrator | 2026-04-17 00:42:35 | INFO  | Group_vars file reorganization completed
2026-04-17 00:43:00.534664 | orchestrator | 2026-04-17 00:42:38 | INFO  | Starting variable preparation from inventory
2026-04-17 00:43:00.534676 | orchestrator | 2026-04-17 00:42:40 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-17 00:43:00.534687 | orchestrator | 2026-04-17 00:42:40 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-17 00:43:00.534698 | orchestrator | 2026-04-17 00:42:40 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-17 00:43:00.534710 | orchestrator | 2026-04-17 00:42:40 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-17 00:43:00.534720 | orchestrator | 2026-04-17 00:42:40 | INFO  | Variable preparation completed
2026-04-17 00:43:00.534731 | orchestrator | 2026-04-17 00:42:41 | INFO  | Starting inventory overwrite handling
2026-04-17 00:43:00.534742 | orchestrator | 2026-04-17 00:42:41 | INFO  | Handling group overwrites in 99-overwrite
2026-04-17 00:43:00.534753 | orchestrator | 2026-04-17 00:42:41 | INFO  | Removing group frr:children from 60-generic
2026-04-17 00:43:00.534791 | orchestrator | 2026-04-17 00:42:41 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-17 00:43:00.534803 | orchestrator | 2026-04-17 00:42:41 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-17 00:43:00.534814 | orchestrator | 2026-04-17 00:42:41 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-17 00:43:00.534824 | orchestrator | 2026-04-17 00:42:41 | INFO  | Handling group overwrites in 20-roles
2026-04-17 00:43:00.534835 | orchestrator | 2026-04-17 00:42:41 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-17 00:43:00.534846 | orchestrator | 2026-04-17 00:42:41 | INFO  | Removed 5 group(s) in total
2026-04-17 00:43:00.534856 | orchestrator | 2026-04-17 00:42:41 | INFO  | Inventory overwrite handling completed
2026-04-17 00:43:00.534867 | orchestrator | 2026-04-17 00:42:43 | INFO  | Starting merge of inventory files
2026-04-17 00:43:00.534877 | orchestrator | 2026-04-17 00:42:43 | INFO  | Inventory files merged successfully
2026-04-17 00:43:00.534888 | orchestrator | 2026-04-17 00:42:47 | INFO  | Generating minified hosts file
2026-04-17 00:43:00.534899 | orchestrator | 2026-04-17 00:42:48 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-17 00:43:00.534911 | orchestrator | 2026-04-17 00:42:48 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-17 00:43:00.534940 | orchestrator | 2026-04-17 00:42:49 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-17 00:43:00.534951 | orchestrator | 2026-04-17 00:42:59 | INFO  | Successfully wrote ClusterShell configuration
2026-04-17 00:43:00.534963 | orchestrator | [master f86749e] 2026-04-17-00-43
2026-04-17 00:43:00.534975 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-17 00:43:00.534988 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-17 00:43:00.535002 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-17 00:43:00.535014 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-17 00:43:01.781956 | orchestrator | 2026-04-17 00:43:01 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-17 00:43:01.844578 | orchestrator | 2026-04-17 00:43:01 | INFO  | Task bd2c6953-8e15-4e07-9abe-7a3b682e3212 (ceph-create-lvm-devices) was prepared for execution.
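The LVM names that appear in the ceph-create-lvm-devices task output that follows use the ceph-volume style naming visible in the logged items: each `osd_lvm_uuid` from `ceph_osd_devices` yields a data LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that mapping, reconstructed from the logged items only (the function name `lvm_names` and the dict shape are illustrative, not the playbook's actual code):

```python
# Sketch: derive ceph-volume style LV/VG names from ceph_osd_devices,
# mirroring the items printed by the "Create block VGs"/"Create block LVs" tasks.
# The input values are taken from this log; everything else is an assumption.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2bf72114-67c4-59b2-99b4-0dc6e46ccf1e"},
    "sdc": {"osd_lvm_uuid": "ecb05008-8fcc-5a4f-bdd9-0d58d51e77db"},
}

def lvm_names(devices: dict) -> list[dict]:
    """Map each OSD device entry to its data LV name and data VG name."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

for entry in lvm_names(ceph_osd_devices):
    print(entry["data"], "->", entry["data_vg"])
```

The two printed pairs match the `(item={'data': ..., 'data_vg': ...})` lines emitted for testbed-node-3 below.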
2026-04-17 00:43:01.844881 | orchestrator | 2026-04-17 00:43:01 | INFO  | It takes a moment until task bd2c6953-8e15-4e07-9abe-7a3b682e3212 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-17 00:43:12.452052 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-17 00:43:12.452162 | orchestrator | 2.16.14
2026-04-17 00:43:12.452177 | orchestrator |
2026-04-17 00:43:12.452187 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-17 00:43:12.452196 | orchestrator |
2026-04-17 00:43:12.452205 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 00:43:12.452213 | orchestrator | Friday 17 April 2026 00:43:05 +0000 (0:00:00.208) 0:00:00.208 **********
2026-04-17 00:43:12.452222 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-17 00:43:12.452230 | orchestrator |
2026-04-17 00:43:12.452238 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 00:43:12.452246 | orchestrator | Friday 17 April 2026 00:43:06 +0000 (0:00:00.206) 0:00:00.415 **********
2026-04-17 00:43:12.452254 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:43:12.452262 | orchestrator |
2026-04-17 00:43:12.452270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452278 | orchestrator | Friday 17 April 2026 00:43:06 +0000 (0:00:00.206) 0:00:00.622 **********
2026-04-17 00:43:12.452305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-17 00:43:12.452313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-17 00:43:12.452321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-17 00:43:12.452329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-17 00:43:12.452337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-17 00:43:12.452357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-17 00:43:12.452365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-17 00:43:12.452373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-17 00:43:12.452381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-17 00:43:12.452388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-17 00:43:12.452396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-17 00:43:12.452404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-17 00:43:12.452411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-17 00:43:12.452461 | orchestrator |
2026-04-17 00:43:12.452470 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452478 | orchestrator | Friday 17 April 2026 00:43:06 +0000 (0:00:00.343) 0:00:00.965 **********
2026-04-17 00:43:12.452485 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452493 | orchestrator |
2026-04-17 00:43:12.452501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452509 | orchestrator | Friday 17 April 2026 00:43:07 +0000 (0:00:00.335) 0:00:01.301 **********
2026-04-17 00:43:12.452516 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452524 | orchestrator |
2026-04-17 00:43:12.452532 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452539 | orchestrator | Friday 17 April 2026 00:43:07 +0000 (0:00:00.164) 0:00:01.465 **********
2026-04-17 00:43:12.452547 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452554 | orchestrator |
2026-04-17 00:43:12.452562 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452570 | orchestrator | Friday 17 April 2026 00:43:07 +0000 (0:00:00.171) 0:00:01.637 **********
2026-04-17 00:43:12.452579 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452588 | orchestrator |
2026-04-17 00:43:12.452597 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452606 | orchestrator | Friday 17 April 2026 00:43:07 +0000 (0:00:00.174) 0:00:01.811 **********
2026-04-17 00:43:12.452615 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452624 | orchestrator |
2026-04-17 00:43:12.452633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452642 | orchestrator | Friday 17 April 2026 00:43:07 +0000 (0:00:00.175) 0:00:01.987 **********
2026-04-17 00:43:12.452650 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452659 | orchestrator |
2026-04-17 00:43:12.452668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452678 | orchestrator | Friday 17 April 2026 00:43:07 +0000 (0:00:00.171) 0:00:02.158 **********
2026-04-17 00:43:12.452687 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452696 | orchestrator |
2026-04-17 00:43:12.452706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452715 | orchestrator | Friday 17 April 2026 00:43:08 +0000 (0:00:00.156) 0:00:02.315 **********
2026-04-17 00:43:12.452724 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.452739 | orchestrator |
2026-04-17 00:43:12.452748 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452757 | orchestrator | Friday 17 April 2026 00:43:08 +0000 (0:00:00.174) 0:00:02.489 **********
2026-04-17 00:43:12.452766 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356)
2026-04-17 00:43:12.452777 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356)
2026-04-17 00:43:12.452786 | orchestrator |
2026-04-17 00:43:12.452795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452819 | orchestrator | Friday 17 April 2026 00:43:08 +0000 (0:00:00.348) 0:00:02.838 **********
2026-04-17 00:43:12.452828 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7)
2026-04-17 00:43:12.452838 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7)
2026-04-17 00:43:12.452847 | orchestrator |
2026-04-17 00:43:12.452855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452865 | orchestrator | Friday 17 April 2026 00:43:09 +0000 (0:00:00.430) 0:00:03.269 **********
2026-04-17 00:43:12.452874 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c)
2026-04-17 00:43:12.452883 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c)
2026-04-17 00:43:12.452893 | orchestrator |
2026-04-17 00:43:12.452901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452910 | orchestrator | Friday 17 April 2026 00:43:09 +0000 (0:00:00.527) 0:00:03.796 **********
2026-04-17 00:43:12.452919 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9)
2026-04-17 00:43:12.452927 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9)
2026-04-17 00:43:12.452936 | orchestrator |
2026-04-17 00:43:12.452944 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:12.452952 | orchestrator | Friday 17 April 2026 00:43:10 +0000 (0:00:00.694) 0:00:04.491 **********
2026-04-17 00:43:12.452959 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-17 00:43:12.452967 | orchestrator |
2026-04-17 00:43:12.452975 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.452983 | orchestrator | Friday 17 April 2026 00:43:10 +0000 (0:00:00.346) 0:00:04.837 **********
2026-04-17 00:43:12.452991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-17 00:43:12.452999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-17 00:43:12.453007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-17 00:43:12.453014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-17 00:43:12.453022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-17 00:43:12.453030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-17 00:43:12.453037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-17 00:43:12.453045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-17 00:43:12.453053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-17 00:43:12.453060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-17 00:43:12.453068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-17 00:43:12.453075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-17 00:43:12.453089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-17 00:43:12.453097 | orchestrator |
2026-04-17 00:43:12.453105 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453112 | orchestrator | Friday 17 April 2026 00:43:11 +0000 (0:00:00.445) 0:00:05.283 **********
2026-04-17 00:43:12.453120 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453128 | orchestrator |
2026-04-17 00:43:12.453135 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453143 | orchestrator | Friday 17 April 2026 00:43:11 +0000 (0:00:00.188) 0:00:05.471 **********
2026-04-17 00:43:12.453151 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453159 | orchestrator |
2026-04-17 00:43:12.453174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453182 | orchestrator | Friday 17 April 2026 00:43:11 +0000 (0:00:00.239) 0:00:05.710 **********
2026-04-17 00:43:12.453190 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453197 | orchestrator |
2026-04-17 00:43:12.453205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453213 | orchestrator | Friday 17 April 2026 00:43:11 +0000 (0:00:00.206) 0:00:05.917 **********
2026-04-17 00:43:12.453221 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453229 | orchestrator |
2026-04-17 00:43:12.453236 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453244 | orchestrator | Friday 17 April 2026 00:43:11 +0000 (0:00:00.200) 0:00:06.118 **********
2026-04-17 00:43:12.453252 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453259 | orchestrator |
2026-04-17 00:43:12.453267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453275 | orchestrator | Friday 17 April 2026 00:43:12 +0000 (0:00:00.175) 0:00:06.293 **********
2026-04-17 00:43:12.453282 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453290 | orchestrator |
2026-04-17 00:43:12.453298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:12.453306 | orchestrator | Friday 17 April 2026 00:43:12 +0000 (0:00:00.203) 0:00:06.497 **********
2026-04-17 00:43:12.453313 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:12.453321 | orchestrator |
2026-04-17 00:43:12.453333 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:20.081778 | orchestrator | Friday 17 April 2026 00:43:12 +0000 (0:00:00.179) 0:00:06.676 **********
2026-04-17 00:43:20.081875 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.081886 | orchestrator |
2026-04-17 00:43:20.081894 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:20.081902 | orchestrator | Friday 17 April 2026 00:43:12 +0000 (0:00:00.180) 0:00:06.857 **********
2026-04-17 00:43:20.081910 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-17 00:43:20.081918 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-17 00:43:20.081926 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-17 00:43:20.081933 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-17 00:43:20.081941 | orchestrator |
2026-04-17 00:43:20.081949 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:20.081956 | orchestrator | Friday 17 April 2026 00:43:13 +0000 (0:00:00.888) 0:00:07.745 **********
2026-04-17 00:43:20.081963 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.081971 | orchestrator |
2026-04-17 00:43:20.081978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:20.081986 | orchestrator | Friday 17 April 2026 00:43:13 +0000 (0:00:00.181) 0:00:07.927 **********
2026-04-17 00:43:20.081993 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082000 | orchestrator |
2026-04-17 00:43:20.082008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:20.082098 | orchestrator | Friday 17 April 2026 00:43:13 +0000 (0:00:00.188) 0:00:08.115 **********
2026-04-17 00:43:20.082108 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082116 | orchestrator |
2026-04-17 00:43:20.082123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:20.082130 | orchestrator | Friday 17 April 2026 00:43:14 +0000 (0:00:00.179) 0:00:08.294 **********
2026-04-17 00:43:20.082138 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082145 | orchestrator |
2026-04-17 00:43:20.082192 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-17 00:43:20.082201 | orchestrator | Friday 17 April 2026 00:43:14 +0000 (0:00:00.182) 0:00:08.477 **********
2026-04-17 00:43:20.082208 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082215 | orchestrator |
2026-04-17 00:43:20.082223 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-17 00:43:20.082230 | orchestrator | Friday 17 April 2026 00:43:14 +0000 (0:00:00.128) 0:00:08.605 **********
2026-04-17 00:43:20.082237 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}})
2026-04-17 00:43:20.082245 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}})
2026-04-17 00:43:20.082252 | orchestrator |
2026-04-17 00:43:20.082260 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-17 00:43:20.082267 | orchestrator | Friday 17 April 2026 00:43:14 +0000 (0:00:00.173) 0:00:08.779 **********
2026-04-17 00:43:20.082275 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082284 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082291 | orchestrator |
2026-04-17 00:43:20.082299 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-17 00:43:20.082307 | orchestrator | Friday 17 April 2026 00:43:16 +0000 (0:00:01.987) 0:00:10.767 **********
2026-04-17 00:43:20.082314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082330 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082339 | orchestrator |
2026-04-17 00:43:20.082351 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-17 00:43:20.082363 | orchestrator | Friday 17 April 2026 00:43:16 +0000 (0:00:00.134) 0:00:10.901 **********
2026-04-17 00:43:20.082375 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082387 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082401 | orchestrator |
2026-04-17 00:43:20.082437 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-17 00:43:20.082451 | orchestrator | Friday 17 April 2026 00:43:18 +0000 (0:00:01.510) 0:00:12.412 **********
2026-04-17 00:43:20.082462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082486 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082498 | orchestrator |
2026-04-17 00:43:20.082509 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-17 00:43:20.082531 | orchestrator | Friday 17 April 2026 00:43:18 +0000 (0:00:00.169) 0:00:12.557 **********
2026-04-17 00:43:20.082564 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082577 | orchestrator |
2026-04-17 00:43:20.082590 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-17 00:43:20.082603 | orchestrator | Friday 17 April 2026 00:43:18 +0000 (0:00:00.169) 0:00:12.727 **********
2026-04-17 00:43:20.082615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082640 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082653 | orchestrator |
2026-04-17 00:43:20.082666 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-17 00:43:20.082679 | orchestrator | Friday 17 April 2026 00:43:18 +0000 (0:00:00.361) 0:00:13.088 **********
2026-04-17 00:43:20.082691 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082703 | orchestrator |
2026-04-17 00:43:20.082715 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-17 00:43:20.082727 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.141) 0:00:13.230 **********
2026-04-17 00:43:20.082744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082778 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082796 | orchestrator |
2026-04-17 00:43:20.082814 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-17 00:43:20.082831 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.160) 0:00:13.391 **********
2026-04-17 00:43:20.082865 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082883 | orchestrator |
2026-04-17 00:43:20.082901 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-17 00:43:20.082918 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.131) 0:00:13.523 **********
2026-04-17 00:43:20.082937 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.082956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.082974 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.082992 | orchestrator |
2026-04-17 00:43:20.083011 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-17 00:43:20.083029 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.135) 0:00:13.658 **********
2026-04-17 00:43:20.083047 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:43:20.083065 | orchestrator |
2026-04-17 00:43:20.083082 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-17 00:43:20.083099 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.112) 0:00:13.770 **********
2026-04-17 00:43:20.083117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.083135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.083153 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.083172 | orchestrator |
2026-04-17 00:43:20.083189 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-17 00:43:20.083221 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.131) 0:00:13.902 **********
2026-04-17 00:43:20.083241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.083258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.083276 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.083293 | orchestrator |
2026-04-17 00:43:20.083309 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-17 00:43:20.083325 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.132) 0:00:14.035 **********
2026-04-17 00:43:20.083341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})
2026-04-17 00:43:20.083359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})
2026-04-17 00:43:20.083378 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.083394 | orchestrator |
2026-04-17 00:43:20.083410 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-17 00:43:20.083533 | orchestrator | Friday 17 April 2026 00:43:19 +0000 (0:00:00.154) 0:00:14.190 **********
2026-04-17 00:43:20.083568 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:20.083586 | orchestrator |
2026-04-17 00:43:20.083603 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-17 00:43:20.083642 | orchestrator | Friday 17 April 2026 00:43:20 +0000 (0:00:00.111) 0:00:14.301 **********
2026-04-17 00:43:25.491548 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.491635 | orchestrator |
2026-04-17 00:43:25.491646 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-17 00:43:25.491655 | orchestrator | Friday 17 April 2026 00:43:20 +0000 (0:00:00.130) 0:00:14.432 **********
2026-04-17 00:43:25.491662 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.491669 | orchestrator |
2026-04-17 00:43:25.491676 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-17 00:43:25.491683 | orchestrator | Friday 17 April 2026 00:43:20 +0000 (0:00:00.103) 0:00:14.536 **********
2026-04-17 00:43:25.491690 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 00:43:25.491698 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-17 00:43:25.491705 | orchestrator | }
2026-04-17 00:43:25.491723 | orchestrator |
2026-04-17 00:43:25.491739 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-17 00:43:25.491746 | orchestrator | Friday 17 April 2026 00:43:20 +0000 (0:00:00.258) 0:00:14.794 **********
2026-04-17 00:43:25.491753 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 00:43:25.491760 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-17 00:43:25.491767 | orchestrator | }
2026-04-17 00:43:25.491773 | orchestrator |
2026-04-17 00:43:25.491780 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-17 00:43:25.491787 | orchestrator | Friday 17 April 2026 00:43:20 +0000 (0:00:00.117) 0:00:14.932 **********
2026-04-17 00:43:25.491794 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 00:43:25.491801 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-17 00:43:25.491808 | orchestrator | }
2026-04-17 00:43:25.491815 | orchestrator |
2026-04-17 00:43:25.491821 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-17 00:43:25.491828 | orchestrator | Friday 17 April 2026 00:43:20 +0000 (0:00:00.117) 0:00:15.049 **********
2026-04-17 00:43:25.491835 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:43:25.491842 | orchestrator |
2026-04-17 00:43:25.491862 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-17 00:43:25.491870 | orchestrator | Friday 17 April 2026 00:43:21 +0000 (0:00:00.604) 0:00:15.654 **********
2026-04-17 00:43:25.491894 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:43:25.491901 | orchestrator |
2026-04-17 00:43:25.491908 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-17 00:43:25.491915 | orchestrator | Friday 17 April 2026 00:43:21 +0000 (0:00:00.482) 0:00:16.136 **********
2026-04-17 00:43:25.491921 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:43:25.491928 | orchestrator |
2026-04-17 00:43:25.491935 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-17 00:43:25.491941 | orchestrator | Friday 17 April 2026 00:43:22 +0000 (0:00:00.493) 0:00:16.629 **********
2026-04-17 00:43:25.491948 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:43:25.491955 | orchestrator |
2026-04-17 00:43:25.491961 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-17 00:43:25.491968 | orchestrator | Friday 17 April 2026 00:43:22 +0000 (0:00:00.142) 0:00:16.771 **********
2026-04-17 00:43:25.491975 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.491981 | orchestrator |
2026-04-17 00:43:25.491988 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-17 00:43:25.491995 | orchestrator | Friday 17 April 2026 00:43:22 +0000 (0:00:00.116) 0:00:16.888 **********
2026-04-17 00:43:25.492001 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492008 | orchestrator |
2026-04-17 00:43:25.492015 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-17 00:43:25.492021 | orchestrator | Friday 17 April 2026 00:43:22 +0000 (0:00:00.096) 0:00:16.984 **********
2026-04-17 00:43:25.492028 | orchestrator | ok: [testbed-node-3] => {
2026-04-17 00:43:25.492035 | orchestrator |     "vgs_report": {
2026-04-17 00:43:25.492041 | orchestrator |         "vg": []
2026-04-17 00:43:25.492048 | orchestrator |     }
2026-04-17 00:43:25.492055 | orchestrator | }
2026-04-17 00:43:25.492061 | orchestrator |
2026-04-17 00:43:25.492069 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-17 00:43:25.492076 | orchestrator | Friday 17 April 2026 00:43:22 +0000 (0:00:00.131) 0:00:17.115 **********
2026-04-17 00:43:25.492084 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492091 | orchestrator |
2026-04-17 00:43:25.492099 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-17 00:43:25.492107 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.121) 0:00:17.237 **********
2026-04-17 00:43:25.492114 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492122 | orchestrator |
2026-04-17 00:43:25.492129 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-17 00:43:25.492137 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.113) 0:00:17.351 **********
2026-04-17 00:43:25.492144 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492152 | orchestrator |
2026-04-17 00:43:25.492159 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-17 00:43:25.492166 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.246) 0:00:17.597 **********
2026-04-17 00:43:25.492173 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492179 | orchestrator |
2026-04-17 00:43:25.492186 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-17 00:43:25.492192 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.138) 0:00:17.735 **********
2026-04-17 00:43:25.492199 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492206 | orchestrator |
2026-04-17 00:43:25.492212 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-17 00:43:25.492219 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.135) 0:00:17.871 **********
2026-04-17 00:43:25.492226 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492232 | orchestrator |
2026-04-17 00:43:25.492239 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-17 00:43:25.492246 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.115) 0:00:17.986 **********
2026-04-17 00:43:25.492253 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492265 | orchestrator |
2026-04-17 00:43:25.492272 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-17 00:43:25.492278 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.107) 0:00:18.094 **********
2026-04-17 00:43:25.492297 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492305 | orchestrator |
2026-04-17 00:43:25.492311 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-17 00:43:25.492318 | orchestrator | Friday 17 April 2026 00:43:23 +0000 (0:00:00.109) 0:00:18.203 **********
2026-04-17 00:43:25.492325 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492331 | orchestrator |
2026-04-17 00:43:25.492338 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-17 00:43:25.492345 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.118) 0:00:18.322 **********
2026-04-17 00:43:25.492352 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492358 | orchestrator |
2026-04-17 00:43:25.492365 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-17 00:43:25.492372 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.119) 0:00:18.441 **********
2026-04-17 00:43:25.492378 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492385 | orchestrator |
2026-04-17 00:43:25.492392 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-17 00:43:25.492398 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.127) 0:00:18.569 **********
2026-04-17 00:43:25.492424 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492431 | orchestrator |
2026-04-17 00:43:25.492438 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-17 00:43:25.492444 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.122) 0:00:18.691 **********
2026-04-17 00:43:25.492451 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492458 | orchestrator |
2026-04-17 00:43:25.492464 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-17 00:43:25.492471 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.120) 0:00:18.811 **********
2026-04-17 00:43:25.492477 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:43:25.492484 | orchestrator |
2026-04-17 00:43:25.492495 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-17 00:43:25.492502 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.125) 0:00:18.937 **********
2026-04-17 00:43:25.492510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 
'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:25.492517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:25.492524 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:25.492531 | orchestrator | 2026-04-17 00:43:25.492538 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-17 00:43:25.492544 | orchestrator | Friday 17 April 2026 00:43:24 +0000 (0:00:00.277) 0:00:19.214 ********** 2026-04-17 00:43:25.492551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:25.492558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:25.492565 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:25.492571 | orchestrator | 2026-04-17 00:43:25.492578 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-17 00:43:25.492585 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.129) 0:00:19.344 ********** 2026-04-17 00:43:25.492591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:25.492598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:25.492610 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:25.492617 | orchestrator | 2026-04-17 00:43:25.492623 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-17 
00:43:25.492630 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.101) 0:00:19.446 ********** 2026-04-17 00:43:25.492637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:25.492644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:25.492650 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:25.492657 | orchestrator | 2026-04-17 00:43:25.492664 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-17 00:43:25.492670 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.106) 0:00:19.553 ********** 2026-04-17 00:43:25.492677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:25.492684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:25.492690 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:25.492697 | orchestrator | 2026-04-17 00:43:25.492704 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-17 00:43:25.492710 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.125) 0:00:19.678 ********** 2026-04-17 00:43:25.492721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:29.979104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 
'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:29.979218 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:29.979231 | orchestrator | 2026-04-17 00:43:29.979239 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-17 00:43:29.979248 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.107) 0:00:19.785 ********** 2026-04-17 00:43:29.979307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:29.979318 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:29.979326 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:29.979333 | orchestrator | 2026-04-17 00:43:29.979340 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-17 00:43:29.979347 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.109) 0:00:19.895 ********** 2026-04-17 00:43:29.979355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:29.979362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:29.979369 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:29.979375 | orchestrator | 2026-04-17 00:43:29.979382 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-17 00:43:29.979388 | orchestrator | Friday 17 April 2026 00:43:25 +0000 (0:00:00.101) 0:00:19.996 ********** 2026-04-17 00:43:29.979395 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:43:29.979456 | 
orchestrator | 2026-04-17 00:43:29.979485 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-17 00:43:29.979493 | orchestrator | Friday 17 April 2026 00:43:26 +0000 (0:00:00.486) 0:00:20.483 ********** 2026-04-17 00:43:29.979499 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:43:29.979505 | orchestrator | 2026-04-17 00:43:29.979512 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-17 00:43:29.979536 | orchestrator | Friday 17 April 2026 00:43:26 +0000 (0:00:00.494) 0:00:20.977 ********** 2026-04-17 00:43:29.979543 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:43:29.979549 | orchestrator | 2026-04-17 00:43:29.979557 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-17 00:43:29.979563 | orchestrator | Friday 17 April 2026 00:43:26 +0000 (0:00:00.133) 0:00:21.111 ********** 2026-04-17 00:43:29.979570 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'vg_name': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}) 2026-04-17 00:43:29.979579 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'vg_name': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}) 2026-04-17 00:43:29.979588 | orchestrator | 2026-04-17 00:43:29.979595 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-17 00:43:29.979602 | orchestrator | Friday 17 April 2026 00:43:27 +0000 (0:00:00.158) 0:00:21.269 ********** 2026-04-17 00:43:29.979609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:29.979616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 
'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:29.979623 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:29.979630 | orchestrator | 2026-04-17 00:43:29.979637 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-17 00:43:29.979643 | orchestrator | Friday 17 April 2026 00:43:27 +0000 (0:00:00.275) 0:00:21.545 ********** 2026-04-17 00:43:29.979650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:29.979657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:29.979663 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:29.979670 | orchestrator | 2026-04-17 00:43:29.979676 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-17 00:43:29.979683 | orchestrator | Friday 17 April 2026 00:43:27 +0000 (0:00:00.167) 0:00:21.712 ********** 2026-04-17 00:43:29.979690 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'})  2026-04-17 00:43:29.979698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'})  2026-04-17 00:43:29.979705 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:43:29.979711 | orchestrator | 2026-04-17 00:43:29.979718 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-17 00:43:29.979725 | orchestrator | Friday 17 April 2026 00:43:27 +0000 (0:00:00.122) 0:00:21.835 ********** 2026-04-17 00:43:29.979751 | orchestrator | ok: [testbed-node-3] => { 2026-04-17 
00:43:29.979758 | orchestrator |  "lvm_report": { 2026-04-17 00:43:29.979765 | orchestrator |  "lv": [ 2026-04-17 00:43:29.979772 | orchestrator |  { 2026-04-17 00:43:29.979779 | orchestrator |  "lv_name": "osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e", 2026-04-17 00:43:29.979787 | orchestrator |  "vg_name": "ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e" 2026-04-17 00:43:29.979793 | orchestrator |  }, 2026-04-17 00:43:29.979807 | orchestrator |  { 2026-04-17 00:43:29.979814 | orchestrator |  "lv_name": "osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db", 2026-04-17 00:43:29.979820 | orchestrator |  "vg_name": "ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db" 2026-04-17 00:43:29.979827 | orchestrator |  } 2026-04-17 00:43:29.979834 | orchestrator |  ], 2026-04-17 00:43:29.979840 | orchestrator |  "pv": [ 2026-04-17 00:43:29.979847 | orchestrator |  { 2026-04-17 00:43:29.979854 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-17 00:43:29.979861 | orchestrator |  "vg_name": "ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e" 2026-04-17 00:43:29.979867 | orchestrator |  }, 2026-04-17 00:43:29.979874 | orchestrator |  { 2026-04-17 00:43:29.979881 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-17 00:43:29.979888 | orchestrator |  "vg_name": "ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db" 2026-04-17 00:43:29.979894 | orchestrator |  } 2026-04-17 00:43:29.979901 | orchestrator |  ] 2026-04-17 00:43:29.979912 | orchestrator |  } 2026-04-17 00:43:29.979918 | orchestrator | } 2026-04-17 00:43:29.979925 | orchestrator | 2026-04-17 00:43:29.979931 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-17 00:43:29.979938 | orchestrator | 2026-04-17 00:43:29.979945 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-17 00:43:29.979957 | orchestrator | Friday 17 April 2026 00:43:27 +0000 (0:00:00.234) 0:00:22.070 ********** 2026-04-17 00:43:29.979964 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-17 00:43:29.979971 | orchestrator | 2026-04-17 00:43:29.979979 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-17 00:43:29.979986 | orchestrator | Friday 17 April 2026 00:43:28 +0000 (0:00:00.208) 0:00:22.278 ********** 2026-04-17 00:43:29.979992 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:43:29.979999 | orchestrator | 2026-04-17 00:43:29.980005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980012 | orchestrator | Friday 17 April 2026 00:43:28 +0000 (0:00:00.200) 0:00:22.479 ********** 2026-04-17 00:43:29.980019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-17 00:43:29.980025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-17 00:43:29.980032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-17 00:43:29.980039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-17 00:43:29.980046 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-17 00:43:29.980053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-17 00:43:29.980060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-17 00:43:29.980066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-17 00:43:29.980073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-17 00:43:29.980079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-17 00:43:29.980086 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-17 00:43:29.980093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-17 00:43:29.980099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-17 00:43:29.980106 | orchestrator | 2026-04-17 00:43:29.980113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980119 | orchestrator | Friday 17 April 2026 00:43:28 +0000 (0:00:00.379) 0:00:22.858 ********** 2026-04-17 00:43:29.980126 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:29.980139 | orchestrator | 2026-04-17 00:43:29.980145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980152 | orchestrator | Friday 17 April 2026 00:43:28 +0000 (0:00:00.248) 0:00:23.106 ********** 2026-04-17 00:43:29.980158 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:29.980165 | orchestrator | 2026-04-17 00:43:29.980172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980178 | orchestrator | Friday 17 April 2026 00:43:29 +0000 (0:00:00.145) 0:00:23.252 ********** 2026-04-17 00:43:29.980184 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:29.980191 | orchestrator | 2026-04-17 00:43:29.980197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980204 | orchestrator | Friday 17 April 2026 00:43:29 +0000 (0:00:00.425) 0:00:23.678 ********** 2026-04-17 00:43:29.980210 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:29.980217 | orchestrator | 2026-04-17 00:43:29.980223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980229 | orchestrator | Friday 17 April 2026 00:43:29 +0000 
(0:00:00.177) 0:00:23.856 ********** 2026-04-17 00:43:29.980236 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:29.980242 | orchestrator | 2026-04-17 00:43:29.980249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:29.980256 | orchestrator | Friday 17 April 2026 00:43:29 +0000 (0:00:00.171) 0:00:24.028 ********** 2026-04-17 00:43:29.980262 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:29.980269 | orchestrator | 2026-04-17 00:43:29.980282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745099 | orchestrator | Friday 17 April 2026 00:43:29 +0000 (0:00:00.176) 0:00:24.205 ********** 2026-04-17 00:43:39.745170 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745177 | orchestrator | 2026-04-17 00:43:39.745182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745186 | orchestrator | Friday 17 April 2026 00:43:30 +0000 (0:00:00.182) 0:00:24.387 ********** 2026-04-17 00:43:39.745190 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745194 | orchestrator | 2026-04-17 00:43:39.745199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745203 | orchestrator | Friday 17 April 2026 00:43:30 +0000 (0:00:00.176) 0:00:24.563 ********** 2026-04-17 00:43:39.745207 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd) 2026-04-17 00:43:39.745212 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd) 2026-04-17 00:43:39.745216 | orchestrator | 2026-04-17 00:43:39.745219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745223 | orchestrator | Friday 17 April 2026 00:43:30 +0000 
(0:00:00.384) 0:00:24.947 ********** 2026-04-17 00:43:39.745227 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20) 2026-04-17 00:43:39.745231 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20) 2026-04-17 00:43:39.745234 | orchestrator | 2026-04-17 00:43:39.745239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745242 | orchestrator | Friday 17 April 2026 00:43:31 +0000 (0:00:00.477) 0:00:25.425 ********** 2026-04-17 00:43:39.745246 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128) 2026-04-17 00:43:39.745250 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128) 2026-04-17 00:43:39.745254 | orchestrator | 2026-04-17 00:43:39.745258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745262 | orchestrator | Friday 17 April 2026 00:43:31 +0000 (0:00:00.401) 0:00:25.827 ********** 2026-04-17 00:43:39.745265 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe) 2026-04-17 00:43:39.745283 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe) 2026-04-17 00:43:39.745287 | orchestrator | 2026-04-17 00:43:39.745291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:39.745294 | orchestrator | Friday 17 April 2026 00:43:31 +0000 (0:00:00.392) 0:00:26.219 ********** 2026-04-17 00:43:39.745298 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 00:43:39.745302 | orchestrator | 2026-04-17 00:43:39.745306 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 
00:43:39.745310 | orchestrator | Friday 17 April 2026 00:43:32 +0000 (0:00:00.262) 0:00:26.482 ********** 2026-04-17 00:43:39.745313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-17 00:43:39.745317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-17 00:43:39.745321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-17 00:43:39.745325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-17 00:43:39.745328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-17 00:43:39.745332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-17 00:43:39.745336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-17 00:43:39.745340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-17 00:43:39.745344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-17 00:43:39.745348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-17 00:43:39.745351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-17 00:43:39.745355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-17 00:43:39.745359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-17 00:43:39.745362 | orchestrator | 2026-04-17 00:43:39.745366 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745370 | 
orchestrator | Friday 17 April 2026 00:43:32 +0000 (0:00:00.512) 0:00:26.995 ********** 2026-04-17 00:43:39.745374 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745377 | orchestrator | 2026-04-17 00:43:39.745381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745385 | orchestrator | Friday 17 April 2026 00:43:32 +0000 (0:00:00.182) 0:00:27.178 ********** 2026-04-17 00:43:39.745388 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745439 | orchestrator | 2026-04-17 00:43:39.745444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745448 | orchestrator | Friday 17 April 2026 00:43:33 +0000 (0:00:00.201) 0:00:27.379 ********** 2026-04-17 00:43:39.745452 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745456 | orchestrator | 2026-04-17 00:43:39.745471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745475 | orchestrator | Friday 17 April 2026 00:43:33 +0000 (0:00:00.185) 0:00:27.565 ********** 2026-04-17 00:43:39.745479 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745482 | orchestrator | 2026-04-17 00:43:39.745486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745490 | orchestrator | Friday 17 April 2026 00:43:33 +0000 (0:00:00.181) 0:00:27.746 ********** 2026-04-17 00:43:39.745494 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745497 | orchestrator | 2026-04-17 00:43:39.745501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745510 | orchestrator | Friday 17 April 2026 00:43:33 +0000 (0:00:00.193) 0:00:27.940 ********** 2026-04-17 00:43:39.745514 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745517 | orchestrator | 2026-04-17 
00:43:39.745521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745526 | orchestrator | Friday 17 April 2026 00:43:33 +0000 (0:00:00.184) 0:00:28.124 ********** 2026-04-17 00:43:39.745529 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745533 | orchestrator | 2026-04-17 00:43:39.745537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745540 | orchestrator | Friday 17 April 2026 00:43:34 +0000 (0:00:00.192) 0:00:28.316 ********** 2026-04-17 00:43:39.745555 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745562 | orchestrator | 2026-04-17 00:43:39.745568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745578 | orchestrator | Friday 17 April 2026 00:43:34 +0000 (0:00:00.191) 0:00:28.507 ********** 2026-04-17 00:43:39.745587 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-17 00:43:39.745593 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-17 00:43:39.745599 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-17 00:43:39.745604 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-17 00:43:39.745610 | orchestrator | 2026-04-17 00:43:39.745615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745621 | orchestrator | Friday 17 April 2026 00:43:35 +0000 (0:00:00.767) 0:00:29.275 ********** 2026-04-17 00:43:39.745627 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:43:39.745633 | orchestrator | 2026-04-17 00:43:39.745638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:39.745644 | orchestrator | Friday 17 April 2026 00:43:35 +0000 (0:00:00.167) 0:00:29.442 ********** 2026-04-17 00:43:39.745651 | orchestrator | skipping: [testbed-node-4] 2026-04-17 
00:43:39.745657 | orchestrator |
2026-04-17 00:43:39.745663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:39.745669 | orchestrator | Friday 17 April 2026 00:43:35 +0000 (0:00:00.182) 0:00:29.625 **********
2026-04-17 00:43:39.745675 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:39.745680 | orchestrator |
2026-04-17 00:43:39.745686 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-17 00:43:39.745692 | orchestrator | Friday 17 April 2026 00:43:36 +0000 (0:00:00.621) 0:00:30.246 **********
2026-04-17 00:43:39.745699 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:39.745705 | orchestrator |
2026-04-17 00:43:39.745712 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-17 00:43:39.745719 | orchestrator | Friday 17 April 2026 00:43:36 +0000 (0:00:00.199) 0:00:30.446 **********
2026-04-17 00:43:39.745725 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:39.745731 | orchestrator |
2026-04-17 00:43:39.745737 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-17 00:43:39.745743 | orchestrator | Friday 17 April 2026 00:43:36 +0000 (0:00:00.137) 0:00:30.583 **********
2026-04-17 00:43:39.745750 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f135813a-7de6-5823-bba0-0d89f58fd8f7'}})
2026-04-17 00:43:39.745757 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '96c1a302-a68f-51af-8cb0-5deb1c72c0bb'}})
2026-04-17 00:43:39.745764 | orchestrator |
2026-04-17 00:43:39.745770 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-17 00:43:39.745777 | orchestrator | Friday 17 April 2026 00:43:36 +0000 (0:00:00.196) 0:00:30.780 **********
2026-04-17 00:43:39.745785 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:39.745794 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:39.745807 | orchestrator |
2026-04-17 00:43:39.745814 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-17 00:43:39.745821 | orchestrator | Friday 17 April 2026 00:43:38 +0000 (0:00:01.842) 0:00:32.623 **********
2026-04-17 00:43:39.745826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:39.745832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:39.745836 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:39.745841 | orchestrator |
2026-04-17 00:43:39.745845 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-17 00:43:39.745850 | orchestrator | Friday 17 April 2026 00:43:38 +0000 (0:00:00.145) 0:00:32.768 **********
2026-04-17 00:43:39.745854 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:39.745864 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.046349 | orchestrator |
2026-04-17 00:43:45.046510 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-17 00:43:45.046526 | orchestrator | Friday 17 April 2026 00:43:39 +0000 (0:00:01.280) 0:00:34.049 **********
2026-04-17 00:43:45.046536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.046547 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.046556 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046566 | orchestrator |
2026-04-17 00:43:45.046575 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-17 00:43:45.046584 | orchestrator | Friday 17 April 2026 00:43:39 +0000 (0:00:00.150) 0:00:34.200 **********
2026-04-17 00:43:45.046593 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046601 | orchestrator |
2026-04-17 00:43:45.046610 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-17 00:43:45.046619 | orchestrator | Friday 17 April 2026 00:43:40 +0000 (0:00:00.133) 0:00:34.333 **********
2026-04-17 00:43:45.046642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.046651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.046660 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046669 | orchestrator |
2026-04-17 00:43:45.046681 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-17 00:43:45.046696 | orchestrator | Friday 17 April 2026 00:43:40 +0000 (0:00:00.163) 0:00:34.497 **********
2026-04-17 00:43:45.046711 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046725 | orchestrator |
2026-04-17 00:43:45.046740 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-17 00:43:45.046754 | orchestrator | Friday 17 April 2026 00:43:40 +0000 (0:00:00.119) 0:00:34.616 **********
2026-04-17 00:43:45.046769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.046785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.046826 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046843 | orchestrator |
2026-04-17 00:43:45.046861 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-17 00:43:45.046878 | orchestrator | Friday 17 April 2026 00:43:40 +0000 (0:00:00.140) 0:00:34.757 **********
2026-04-17 00:43:45.046892 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046910 | orchestrator |
2026-04-17 00:43:45.046927 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-17 00:43:45.046943 | orchestrator | Friday 17 April 2026 00:43:40 +0000 (0:00:00.275) 0:00:35.033 **********
2026-04-17 00:43:45.046953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.046964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.046974 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.046984 | orchestrator |
2026-04-17 00:43:45.046994 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-17 00:43:45.047005 | orchestrator | Friday 17 April 2026 00:43:40 +0000 (0:00:00.152) 0:00:35.185 **********
2026-04-17 00:43:45.047021 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:45.047038 | orchestrator |
2026-04-17 00:43:45.047053 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-17 00:43:45.047068 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.135) 0:00:35.320 **********
2026-04-17 00:43:45.047083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.047100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.047118 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047134 | orchestrator |
2026-04-17 00:43:45.047150 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-17 00:43:45.047166 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.144) 0:00:35.465 **********
2026-04-17 00:43:45.047179 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.047190 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.047201 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047211 | orchestrator |
2026-04-17 00:43:45.047221 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-17 00:43:45.047249 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.147) 0:00:35.613 **********
2026-04-17 00:43:45.047258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:45.047267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:45.047275 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047284 | orchestrator |
2026-04-17 00:43:45.047292 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-17 00:43:45.047301 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.145) 0:00:35.758 **********
2026-04-17 00:43:45.047309 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047318 | orchestrator |
2026-04-17 00:43:45.047327 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-17 00:43:45.047335 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.125) 0:00:35.884 **********
2026-04-17 00:43:45.047353 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047362 | orchestrator |
2026-04-17 00:43:45.047370 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-17 00:43:45.047408 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.118) 0:00:36.003 **********
2026-04-17 00:43:45.047422 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047431 | orchestrator |
2026-04-17 00:43:45.047440 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-17 00:43:45.047448 | orchestrator | Friday 17 April 2026 00:43:41 +0000 (0:00:00.115) 0:00:36.118 **********
2026-04-17 00:43:45.047457 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 00:43:45.047465 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-17 00:43:45.047475 | orchestrator | }
2026-04-17 00:43:45.047484 | orchestrator |
2026-04-17 00:43:45.047492 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-17 00:43:45.047501 | orchestrator | Friday 17 April 2026 00:43:42 +0000 (0:00:00.123) 0:00:36.241 **********
2026-04-17 00:43:45.047510 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 00:43:45.047518 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-17 00:43:45.047527 | orchestrator | }
2026-04-17 00:43:45.047536 | orchestrator |
2026-04-17 00:43:45.047544 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-17 00:43:45.047553 | orchestrator | Friday 17 April 2026 00:43:42 +0000 (0:00:00.126) 0:00:36.367 **********
2026-04-17 00:43:45.047561 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 00:43:45.047570 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-17 00:43:45.047579 | orchestrator | }
2026-04-17 00:43:45.047588 | orchestrator |
2026-04-17 00:43:45.047596 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-17 00:43:45.047605 | orchestrator | Friday 17 April 2026 00:43:42 +0000 (0:00:00.125) 0:00:36.493 **********
2026-04-17 00:43:45.047614 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:45.047622 | orchestrator |
2026-04-17 00:43:45.047631 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-17 00:43:45.047639 | orchestrator | Friday 17 April 2026 00:43:42 +0000 (0:00:00.666) 0:00:37.160 **********
2026-04-17 00:43:45.047648 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:45.047657 | orchestrator |
2026-04-17 00:43:45.047665 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-17 00:43:45.047674 | orchestrator | Friday 17 April 2026 00:43:43 +0000 (0:00:00.506) 0:00:37.666 **********
2026-04-17 00:43:45.047682 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:45.047691 | orchestrator |
2026-04-17 00:43:45.047700 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-17 00:43:45.047708 | orchestrator | Friday 17 April 2026 00:43:43 +0000 (0:00:00.557) 0:00:38.224 **********
2026-04-17 00:43:45.047717 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:45.047725 | orchestrator |
2026-04-17 00:43:45.047734 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-17 00:43:45.047742 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.151) 0:00:38.375 **********
2026-04-17 00:43:45.047751 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047759 | orchestrator |
2026-04-17 00:43:45.047768 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-17 00:43:45.047777 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.097) 0:00:38.473 **********
2026-04-17 00:43:45.047785 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047794 | orchestrator |
2026-04-17 00:43:45.047802 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-17 00:43:45.047811 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.103) 0:00:38.576 **********
2026-04-17 00:43:45.047820 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 00:43:45.047829 | orchestrator |     "vgs_report": {
2026-04-17 00:43:45.047837 | orchestrator |         "vg": []
2026-04-17 00:43:45.047846 | orchestrator |     }
2026-04-17 00:43:45.047855 | orchestrator | }
2026-04-17 00:43:45.047872 | orchestrator |
2026-04-17 00:43:45.047881 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-17 00:43:45.047890 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.154) 0:00:38.730 **********
2026-04-17 00:43:45.047898 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047907 | orchestrator |
2026-04-17 00:43:45.047915 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-17 00:43:45.047924 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.130) 0:00:38.861 **********
2026-04-17 00:43:45.047932 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047941 | orchestrator |
2026-04-17 00:43:45.047949 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-17 00:43:45.047958 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.138) 0:00:38.999 **********
2026-04-17 00:43:45.047966 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.047975 | orchestrator |
2026-04-17 00:43:45.047983 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-17 00:43:45.047992 | orchestrator | Friday 17 April 2026 00:43:44 +0000 (0:00:00.130) 0:00:39.130 **********
2026-04-17 00:43:45.048001 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:45.048009 | orchestrator |
2026-04-17 00:43:45.048024 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-17 00:43:49.573664 | orchestrator | Friday 17 April 2026 00:43:45 +0000 (0:00:00.138) 0:00:39.268 **********
2026-04-17 00:43:49.573747 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573758 | orchestrator |
2026-04-17 00:43:49.573766 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-17 00:43:49.573772 | orchestrator | Friday 17 April 2026 00:43:45 +0000 (0:00:00.137) 0:00:39.406 **********
2026-04-17 00:43:49.573779 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573785 | orchestrator |
2026-04-17 00:43:49.573802 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-17 00:43:49.573809 | orchestrator | Friday 17 April 2026 00:43:45 +0000 (0:00:00.358) 0:00:39.765 **********
2026-04-17 00:43:49.573815 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573821 | orchestrator |
2026-04-17 00:43:49.573828 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-17 00:43:49.573834 | orchestrator | Friday 17 April 2026 00:43:45 +0000 (0:00:00.146) 0:00:39.912 **********
2026-04-17 00:43:49.573840 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573846 | orchestrator |
2026-04-17 00:43:49.573853 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-17 00:43:49.573859 | orchestrator | Friday 17 April 2026 00:43:45 +0000 (0:00:00.148) 0:00:40.060 **********
2026-04-17 00:43:49.573865 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573871 | orchestrator |
2026-04-17 00:43:49.573877 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-17 00:43:49.573884 | orchestrator | Friday 17 April 2026 00:43:45 +0000 (0:00:00.124) 0:00:40.185 **********
2026-04-17 00:43:49.573890 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573896 | orchestrator |
2026-04-17 00:43:49.573902 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-17 00:43:49.573912 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.148) 0:00:40.333 **********
2026-04-17 00:43:49.573923 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.573936 | orchestrator |
2026-04-17 00:43:49.573971 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-17 00:43:49.573982 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.130) 0:00:40.464 **********
2026-04-17 00:43:49.573992 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574002 | orchestrator |
2026-04-17 00:43:49.574085 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-17 00:43:49.574100 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.134) 0:00:40.598 **********
2026-04-17 00:43:49.574110 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574141 | orchestrator |
2026-04-17 00:43:49.574151 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-17 00:43:49.574161 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.124) 0:00:40.722 **********
2026-04-17 00:43:49.574172 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574182 | orchestrator |
2026-04-17 00:43:49.574192 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-17 00:43:49.574202 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.134) 0:00:40.857 **********
2026-04-17 00:43:49.574213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574222 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574229 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574236 | orchestrator |
2026-04-17 00:43:49.574243 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-17 00:43:49.574251 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.163) 0:00:41.020 **********
2026-04-17 00:43:49.574258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574273 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574280 | orchestrator |
2026-04-17 00:43:49.574287 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-17 00:43:49.574294 | orchestrator | Friday 17 April 2026 00:43:46 +0000 (0:00:00.144) 0:00:41.165 **********
2026-04-17 00:43:49.574301 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574308 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574315 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574322 | orchestrator |
2026-04-17 00:43:49.574328 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-17 00:43:49.574335 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.144) 0:00:41.309 **********
2026-04-17 00:43:49.574342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574357 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574364 | orchestrator |
2026-04-17 00:43:49.574408 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-17 00:43:49.574416 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.349) 0:00:41.659 **********
2026-04-17 00:43:49.574423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574438 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574445 | orchestrator |
2026-04-17 00:43:49.574452 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-17 00:43:49.574459 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.139) 0:00:41.799 **********
2026-04-17 00:43:49.574471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574491 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574498 | orchestrator |
2026-04-17 00:43:49.574505 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-17 00:43:49.574512 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.148) 0:00:41.940 **********
2026-04-17 00:43:49.574519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574534 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574540 | orchestrator |
2026-04-17 00:43:49.574548 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-17 00:43:49.574554 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.148) 0:00:42.088 **********
2026-04-17 00:43:49.574561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574576 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574582 | orchestrator |
2026-04-17 00:43:49.574589 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-17 00:43:49.574595 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.136) 0:00:42.225 **********
2026-04-17 00:43:49.574601 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:49.574607 | orchestrator |
2026-04-17 00:43:49.574613 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-17 00:43:49.574619 | orchestrator | Friday 17 April 2026 00:43:48 +0000 (0:00:00.543) 0:00:42.768 **********
2026-04-17 00:43:49.574625 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:49.574632 | orchestrator |
2026-04-17 00:43:49.574638 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-17 00:43:49.574644 | orchestrator | Friday 17 April 2026 00:43:49 +0000 (0:00:00.488) 0:00:43.257 **********
2026-04-17 00:43:49.574650 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:43:49.574656 | orchestrator |
2026-04-17 00:43:49.574662 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-17 00:43:49.574668 | orchestrator | Friday 17 April 2026 00:43:49 +0000 (0:00:00.154) 0:00:43.411 **********
2026-04-17 00:43:49.574674 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'vg_name': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574681 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'vg_name': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574687 | orchestrator |
2026-04-17 00:43:49.574693 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-17 00:43:49.574699 | orchestrator | Friday 17 April 2026 00:43:49 +0000 (0:00:00.167) 0:00:43.579 **********
2026-04-17 00:43:49.574706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:49.574718 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:49.574728 | orchestrator |
2026-04-17 00:43:49.574734 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-17 00:43:49.574740 | orchestrator | Friday 17 April 2026 00:43:49 +0000 (0:00:00.145) 0:00:43.725 **********
2026-04-17 00:43:49.574747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:49.574757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:55.338885 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:55.339011 | orchestrator |
2026-04-17 00:43:55.339038 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-17 00:43:55.339064 | orchestrator | Friday 17 April 2026 00:43:49 +0000 (0:00:00.162) 0:00:43.887 **********
2026-04-17 00:43:55.339075 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'})
2026-04-17 00:43:55.339088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'})
2026-04-17 00:43:55.339099 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:43:55.339110 | orchestrator |
2026-04-17 00:43:55.339121 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-17 00:43:55.339133 | orchestrator | Friday 17 April 2026 00:43:49 +0000 (0:00:00.138) 0:00:44.025 **********
2026-04-17 00:43:55.339144 | orchestrator | ok: [testbed-node-4] => {
2026-04-17 00:43:55.339155 | orchestrator |     "lvm_report": {
2026-04-17 00:43:55.339167 | orchestrator |         "lv": [
2026-04-17 00:43:55.339193 | orchestrator |             {
2026-04-17 00:43:55.339205 | orchestrator |                 "lv_name": "osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb",
2026-04-17 00:43:55.339217 | orchestrator |                 "vg_name": "ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb"
2026-04-17 00:43:55.339228 | orchestrator |             },
2026-04-17 00:43:55.339239 | orchestrator |             {
2026-04-17 00:43:55.339249 | orchestrator |                 "lv_name": "osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7",
2026-04-17 00:43:55.339260 | orchestrator |                 "vg_name": "ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7"
2026-04-17 00:43:55.339271 | orchestrator |             }
2026-04-17 00:43:55.339283 | orchestrator |         ],
2026-04-17 00:43:55.339293 | orchestrator |         "pv": [
2026-04-17 00:43:55.339304 | orchestrator |             {
2026-04-17 00:43:55.339315 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-17 00:43:55.339326 | orchestrator |                 "vg_name": "ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7"
2026-04-17 00:43:55.339337 | orchestrator |             },
2026-04-17 00:43:55.339348 | orchestrator |             {
2026-04-17 00:43:55.339359 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-17 00:43:55.339401 | orchestrator |                 "vg_name": "ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb"
2026-04-17 00:43:55.339415 | orchestrator |             }
2026-04-17 00:43:55.339428 | orchestrator |         ]
2026-04-17 00:43:55.339440 | orchestrator |     }
2026-04-17 00:43:55.339452 | orchestrator | }
2026-04-17 00:43:55.339465 | orchestrator |
2026-04-17 00:43:55.339477 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-17 00:43:55.339489 | orchestrator |
2026-04-17 00:43:55.339501 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-17 00:43:55.339514 | orchestrator | Friday 17 April 2026 00:43:50 +0000 (0:00:00.396) 0:00:44.422 **********
2026-04-17 00:43:55.339526 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-17 00:43:55.339538 | orchestrator |
2026-04-17 00:43:55.339551 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-17 00:43:55.339562 | orchestrator | Friday 17 April 2026 00:43:50 +0000 (0:00:00.223) 0:00:44.646 **********
2026-04-17 00:43:55.339596 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:43:55.339609 | orchestrator |
2026-04-17 00:43:55.339621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.339633 | orchestrator | Friday 17 April 2026 00:43:50 +0000 (0:00:00.232) 0:00:44.878 **********
2026-04-17 00:43:55.339645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-17 00:43:55.339656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-17 00:43:55.339668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-17 00:43:55.339684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-17 00:43:55.339698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-17 00:43:55.339718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-17 00:43:55.339730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-17 00:43:55.339742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-17 00:43:55.339754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-17 00:43:55.339766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-17 00:43:55.339776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-17 00:43:55.339787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-17 00:43:55.339798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-17 00:43:55.339808 | orchestrator |
2026-04-17 00:43:55.339819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.339830 | orchestrator | Friday 17 April 2026 00:43:51 +0000 (0:00:00.413) 0:00:45.292 **********
2026-04-17 00:43:55.339840 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.339851 | orchestrator |
2026-04-17 00:43:55.339862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.339872 | orchestrator | Friday 17 April 2026 00:43:51 +0000 (0:00:00.194) 0:00:45.487 **********
2026-04-17 00:43:55.339883 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.339894 | orchestrator |
2026-04-17 00:43:55.339905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.339933 | orchestrator | Friday 17 April 2026 00:43:51 +0000 (0:00:00.192) 0:00:45.680 **********
2026-04-17 00:43:55.339945 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.339956 | orchestrator |
2026-04-17 00:43:55.339967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.339977 | orchestrator | Friday 17 April 2026 00:43:51 +0000 (0:00:00.183) 0:00:45.863 **********
2026-04-17 00:43:55.339988 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.339999 | orchestrator |
2026-04-17 00:43:55.340010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340020 | orchestrator | Friday 17 April 2026 00:43:51 +0000 (0:00:00.218) 0:00:46.082 **********
2026-04-17 00:43:55.340031 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.340041 | orchestrator |
2026-04-17 00:43:55.340052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340063 | orchestrator | Friday 17 April 2026 00:43:52 +0000 (0:00:00.204) 0:00:46.286 **********
2026-04-17 00:43:55.340074 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.340084 | orchestrator |
2026-04-17 00:43:55.340095 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340106 | orchestrator | Friday 17 April 2026 00:43:52 +0000 (0:00:00.491) 0:00:46.777 **********
2026-04-17 00:43:55.340117 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.340136 | orchestrator |
2026-04-17 00:43:55.340147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340157 | orchestrator | Friday 17 April 2026 00:43:52 +0000 (0:00:00.182) 0:00:46.960 **********
2026-04-17 00:43:55.340168 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:43:55.340179 | orchestrator |
2026-04-17 00:43:55.340190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340201 | orchestrator | Friday 17 April 2026 00:43:52 +0000 (0:00:00.145) 0:00:47.105 **********
2026-04-17 00:43:55.340211 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6)
2026-04-17 00:43:55.340223 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6)
2026-04-17 00:43:55.340234 | orchestrator |
2026-04-17 00:43:55.340245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340255 | orchestrator | Friday 17 April 2026 00:43:53 +0000 (0:00:00.458) 0:00:47.564 **********
2026-04-17 00:43:55.340272 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363)
2026-04-17 00:43:55.340287 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363)
2026-04-17 00:43:55.340298 | orchestrator |
2026-04-17 00:43:55.340309 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-17 00:43:55.340319 | orchestrator | Friday 17 April 2026 00:43:53 +0000 (0:00:00.464) 0:00:48.028 **********
2026-04-17 00:43:55.340330 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06)
2026-04-17 00:43:55.340341 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06)
2026-04-17 00:43:55.340351 | orchestrator |
2026-04-17 00:43:55.340362 | orchestrator |
TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:55.340430 | orchestrator | Friday 17 April 2026 00:43:54 +0000 (0:00:00.439) 0:00:48.467 ********** 2026-04-17 00:43:55.340444 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b) 2026-04-17 00:43:55.340455 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b) 2026-04-17 00:43:55.340465 | orchestrator | 2026-04-17 00:43:55.340476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-17 00:43:55.340487 | orchestrator | Friday 17 April 2026 00:43:54 +0000 (0:00:00.448) 0:00:48.916 ********** 2026-04-17 00:43:55.340498 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-17 00:43:55.340509 | orchestrator | 2026-04-17 00:43:55.340519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:43:55.340530 | orchestrator | Friday 17 April 2026 00:43:55 +0000 (0:00:00.329) 0:00:49.245 ********** 2026-04-17 00:43:55.340541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-17 00:43:55.340552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-17 00:43:55.340563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-17 00:43:55.340574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-17 00:43:55.340584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-17 00:43:55.340595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-17 00:43:55.340643 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-17 00:43:55.340655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-17 00:43:55.340666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-17 00:43:55.340685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-17 00:43:55.340696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-17 00:43:55.340715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-17 00:44:04.022887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-17 00:44:04.023038 | orchestrator | 2026-04-17 00:44:04.023062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023081 | orchestrator | Friday 17 April 2026 00:43:55 +0000 (0:00:00.398) 0:00:49.644 ********** 2026-04-17 00:44:04.023100 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023119 | orchestrator | 2026-04-17 00:44:04.023137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023155 | orchestrator | Friday 17 April 2026 00:43:55 +0000 (0:00:00.189) 0:00:49.833 ********** 2026-04-17 00:44:04.023172 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023190 | orchestrator | 2026-04-17 00:44:04.023207 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023225 | orchestrator | Friday 17 April 2026 00:43:55 +0000 (0:00:00.172) 0:00:50.005 ********** 2026-04-17 00:44:04.023242 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023261 | orchestrator | 2026-04-17 00:44:04.023279 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023316 | orchestrator | Friday 17 April 2026 00:43:56 +0000 (0:00:00.531) 0:00:50.537 ********** 2026-04-17 00:44:04.023335 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023352 | orchestrator | 2026-04-17 00:44:04.023468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023491 | orchestrator | Friday 17 April 2026 00:43:56 +0000 (0:00:00.196) 0:00:50.734 ********** 2026-04-17 00:44:04.023511 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023528 | orchestrator | 2026-04-17 00:44:04.023547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023565 | orchestrator | Friday 17 April 2026 00:43:56 +0000 (0:00:00.188) 0:00:50.922 ********** 2026-04-17 00:44:04.023583 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023600 | orchestrator | 2026-04-17 00:44:04.023617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023637 | orchestrator | Friday 17 April 2026 00:43:56 +0000 (0:00:00.196) 0:00:51.119 ********** 2026-04-17 00:44:04.023656 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023677 | orchestrator | 2026-04-17 00:44:04.023696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023715 | orchestrator | Friday 17 April 2026 00:43:57 +0000 (0:00:00.200) 0:00:51.319 ********** 2026-04-17 00:44:04.023733 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023751 | orchestrator | 2026-04-17 00:44:04.023769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023788 | orchestrator | Friday 17 April 2026 00:43:57 +0000 (0:00:00.183) 0:00:51.502 ********** 
2026-04-17 00:44:04.023807 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-17 00:44:04.023826 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-17 00:44:04.023844 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-17 00:44:04.023862 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-17 00:44:04.023879 | orchestrator | 2026-04-17 00:44:04.023897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.023915 | orchestrator | Friday 17 April 2026 00:43:57 +0000 (0:00:00.573) 0:00:52.076 ********** 2026-04-17 00:44:04.023933 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.023949 | orchestrator | 2026-04-17 00:44:04.023967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.024015 | orchestrator | Friday 17 April 2026 00:43:58 +0000 (0:00:00.185) 0:00:52.262 ********** 2026-04-17 00:44:04.024034 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024051 | orchestrator | 2026-04-17 00:44:04.024070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.024088 | orchestrator | Friday 17 April 2026 00:43:58 +0000 (0:00:00.201) 0:00:52.463 ********** 2026-04-17 00:44:04.024106 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024123 | orchestrator | 2026-04-17 00:44:04.024141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-17 00:44:04.024159 | orchestrator | Friday 17 April 2026 00:43:58 +0000 (0:00:00.192) 0:00:52.656 ********** 2026-04-17 00:44:04.024177 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024194 | orchestrator | 2026-04-17 00:44:04.024211 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-17 00:44:04.024228 | orchestrator | Friday 17 April 2026 00:43:58 +0000 
(0:00:00.211) 0:00:52.867 ********** 2026-04-17 00:44:04.024247 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024264 | orchestrator | 2026-04-17 00:44:04.024282 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-17 00:44:04.024299 | orchestrator | Friday 17 April 2026 00:43:59 +0000 (0:00:00.513) 0:00:53.380 ********** 2026-04-17 00:44:04.024316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd097a065-5c07-563d-9f82-653f6f04c198'}}) 2026-04-17 00:44:04.024334 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '037810f1-d9a1-54dd-a4a8-d143a432af64'}}) 2026-04-17 00:44:04.024351 | orchestrator | 2026-04-17 00:44:04.024390 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-17 00:44:04.024407 | orchestrator | Friday 17 April 2026 00:43:59 +0000 (0:00:00.241) 0:00:53.622 ********** 2026-04-17 00:44:04.024425 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'}) 2026-04-17 00:44:04.024442 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'}) 2026-04-17 00:44:04.024457 | orchestrator | 2026-04-17 00:44:04.024472 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-17 00:44:04.024517 | orchestrator | Friday 17 April 2026 00:44:01 +0000 (0:00:01.861) 0:00:55.483 ********** 2026-04-17 00:44:04.024536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:04.024555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:04.024572 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024590 | orchestrator | 2026-04-17 00:44:04.024608 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-17 00:44:04.024625 | orchestrator | Friday 17 April 2026 00:44:01 +0000 (0:00:00.129) 0:00:55.612 ********** 2026-04-17 00:44:04.024641 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'}) 2026-04-17 00:44:04.024669 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'}) 2026-04-17 00:44:04.024687 | orchestrator | 2026-04-17 00:44:04.024705 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-17 00:44:04.024723 | orchestrator | Friday 17 April 2026 00:44:02 +0000 (0:00:01.389) 0:00:57.002 ********** 2026-04-17 00:44:04.024741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:04.024772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:04.024790 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024806 | orchestrator | 2026-04-17 00:44:04.024822 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-17 00:44:04.024838 | orchestrator | Friday 17 April 2026 00:44:02 +0000 (0:00:00.122) 0:00:57.124 ********** 2026-04-17 00:44:04.024854 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024870 | 
orchestrator | 2026-04-17 00:44:04.024886 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-17 00:44:04.024902 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.144) 0:00:57.268 ********** 2026-04-17 00:44:04.024917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:04.024934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:04.024950 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.024965 | orchestrator | 2026-04-17 00:44:04.024981 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-17 00:44:04.024997 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.148) 0:00:57.417 ********** 2026-04-17 00:44:04.025012 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.025029 | orchestrator | 2026-04-17 00:44:04.025045 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-17 00:44:04.025062 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.109) 0:00:57.527 ********** 2026-04-17 00:44:04.025079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:04.025096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:04.025112 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.025128 | orchestrator | 2026-04-17 00:44:04.025145 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-17 00:44:04.025165 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.125) 0:00:57.653 ********** 2026-04-17 00:44:04.025189 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.025205 | orchestrator | 2026-04-17 00:44:04.025223 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-17 00:44:04.025239 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.131) 0:00:57.784 ********** 2026-04-17 00:44:04.025256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:04.025270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:04.025286 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:04.025301 | orchestrator | 2026-04-17 00:44:04.025317 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-17 00:44:04.025333 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.119) 0:00:57.904 ********** 2026-04-17 00:44:04.025350 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:04.025418 | orchestrator | 2026-04-17 00:44:04.025438 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-17 00:44:04.025455 | orchestrator | Friday 17 April 2026 00:44:03 +0000 (0:00:00.286) 0:00:58.190 ********** 2026-04-17 00:44:04.025488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:09.437911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:09.438008 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438077 | orchestrator | 2026-04-17 00:44:09.438088 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-17 00:44:09.438099 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.156) 0:00:58.347 ********** 2026-04-17 00:44:09.438108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:09.438118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:09.438127 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438136 | orchestrator | 2026-04-17 00:44:09.438159 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-17 00:44:09.438168 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.124) 0:00:58.471 ********** 2026-04-17 00:44:09.438178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:09.438194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:09.438208 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438232 | orchestrator | 2026-04-17 00:44:09.438249 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-17 00:44:09.438264 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.125) 0:00:58.597 ********** 2026-04-17 00:44:09.438278 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438293 | orchestrator | 2026-04-17 00:44:09.438307 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-17 00:44:09.438324 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.145) 0:00:58.742 ********** 2026-04-17 00:44:09.438339 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438354 | orchestrator | 2026-04-17 00:44:09.438439 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-17 00:44:09.438449 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.117) 0:00:58.859 ********** 2026-04-17 00:44:09.438457 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438469 | orchestrator | 2026-04-17 00:44:09.438480 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-17 00:44:09.438490 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.129) 0:00:58.989 ********** 2026-04-17 00:44:09.438500 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 00:44:09.438511 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-17 00:44:09.438521 | orchestrator | } 2026-04-17 00:44:09.438532 | orchestrator | 2026-04-17 00:44:09.438542 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-17 00:44:09.438551 | orchestrator | Friday 17 April 2026 00:44:04 +0000 (0:00:00.128) 0:00:59.117 ********** 2026-04-17 00:44:09.438561 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 00:44:09.438571 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-17 00:44:09.438581 | orchestrator | } 2026-04-17 00:44:09.438591 | orchestrator | 2026-04-17 00:44:09.438601 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-17 00:44:09.438611 | orchestrator | Friday 17 April 2026 00:44:05 +0000 (0:00:00.118) 0:00:59.236 ********** 2026-04-17 00:44:09.438621 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 00:44:09.438632 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-17 00:44:09.438642 | orchestrator | } 2026-04-17 00:44:09.438652 | orchestrator | 2026-04-17 00:44:09.438662 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-17 00:44:09.438672 | orchestrator | Friday 17 April 2026 00:44:05 +0000 (0:00:00.130) 0:00:59.367 ********** 2026-04-17 00:44:09.438703 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:09.438714 | orchestrator | 2026-04-17 00:44:09.438724 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-17 00:44:09.438735 | orchestrator | Friday 17 April 2026 00:44:05 +0000 (0:00:00.502) 0:00:59.869 ********** 2026-04-17 00:44:09.438744 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:09.438754 | orchestrator | 2026-04-17 00:44:09.438763 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-17 00:44:09.438773 | orchestrator | Friday 17 April 2026 00:44:06 +0000 (0:00:00.491) 0:01:00.361 ********** 2026-04-17 00:44:09.438784 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:09.438795 | orchestrator | 2026-04-17 00:44:09.438805 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-17 00:44:09.438815 | orchestrator | Friday 17 April 2026 00:44:06 +0000 (0:00:00.646) 0:01:01.008 ********** 2026-04-17 00:44:09.438825 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:09.438833 | orchestrator | 2026-04-17 00:44:09.438842 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-17 00:44:09.438851 | orchestrator | Friday 17 April 2026 00:44:06 +0000 (0:00:00.118) 0:01:01.127 ********** 2026-04-17 00:44:09.438859 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438868 | orchestrator | 2026-04-17 00:44:09.438876 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-17 00:44:09.438885 | orchestrator | Friday 17 April 2026 00:44:06 +0000 (0:00:00.094) 0:01:01.221 ********** 2026-04-17 00:44:09.438893 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.438902 | orchestrator | 2026-04-17 00:44:09.438910 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-17 00:44:09.438919 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.103) 0:01:01.325 ********** 2026-04-17 00:44:09.438930 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 00:44:09.438945 | orchestrator |  "vgs_report": { 2026-04-17 00:44:09.438957 | orchestrator |  "vg": [] 2026-04-17 00:44:09.438994 | orchestrator |  } 2026-04-17 00:44:09.439010 | orchestrator | } 2026-04-17 00:44:09.439025 | orchestrator | 2026-04-17 00:44:09.439034 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-17 00:44:09.439043 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.123) 0:01:01.448 ********** 2026-04-17 00:44:09.439052 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439061 | orchestrator | 2026-04-17 00:44:09.439069 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-17 00:44:09.439078 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.124) 0:01:01.573 ********** 2026-04-17 00:44:09.439087 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439095 | orchestrator | 2026-04-17 00:44:09.439104 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-17 00:44:09.439112 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.162) 0:01:01.736 ********** 2026-04-17 00:44:09.439121 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439129 | orchestrator | 2026-04-17 00:44:09.439138 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-17 00:44:09.439146 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.132) 0:01:01.868 ********** 2026-04-17 00:44:09.439155 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439163 | orchestrator | 2026-04-17 00:44:09.439172 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-17 00:44:09.439181 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.105) 0:01:01.973 ********** 2026-04-17 00:44:09.439189 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439198 | orchestrator | 2026-04-17 00:44:09.439207 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-17 00:44:09.439215 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.110) 0:01:02.084 ********** 2026-04-17 00:44:09.439224 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439240 | orchestrator | 2026-04-17 00:44:09.439248 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-17 00:44:09.439257 | orchestrator | Friday 17 April 2026 00:44:07 +0000 (0:00:00.123) 0:01:02.208 ********** 2026-04-17 00:44:09.439265 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439274 | orchestrator | 2026-04-17 00:44:09.439282 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-17 00:44:09.439291 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.122) 0:01:02.330 ********** 2026-04-17 00:44:09.439299 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439308 | orchestrator | 2026-04-17 00:44:09.439316 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-17 00:44:09.439325 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.127) 0:01:02.458 ********** 2026-04-17 00:44:09.439333 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 00:44:09.439342 | orchestrator | 2026-04-17 00:44:09.439351 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-17 00:44:09.439382 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.261) 0:01:02.719 ********** 2026-04-17 00:44:09.439393 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439401 | orchestrator | 2026-04-17 00:44:09.439410 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-17 00:44:09.439418 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.118) 0:01:02.838 ********** 2026-04-17 00:44:09.439427 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439435 | orchestrator | 2026-04-17 00:44:09.439444 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-17 00:44:09.439453 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.117) 0:01:02.956 ********** 2026-04-17 00:44:09.439461 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439470 | orchestrator | 2026-04-17 00:44:09.439478 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-17 00:44:09.439487 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.125) 0:01:03.081 ********** 2026-04-17 00:44:09.439495 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439504 | orchestrator | 2026-04-17 00:44:09.439512 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-17 00:44:09.439521 | orchestrator | Friday 17 April 2026 00:44:08 +0000 (0:00:00.124) 0:01:03.206 ********** 2026-04-17 00:44:09.439529 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439538 | orchestrator | 2026-04-17 00:44:09.439547 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-17 00:44:09.439555 | 
orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.116) 0:01:03.322 ********** 2026-04-17 00:44:09.439564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:09.439573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:09.439582 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439591 | orchestrator | 2026-04-17 00:44:09.439599 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-17 00:44:09.439608 | orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.129) 0:01:03.451 ********** 2026-04-17 00:44:09.439626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:09.439635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:09.439644 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:09.439652 | orchestrator | 2026-04-17 00:44:09.439661 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-17 00:44:09.439676 | orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.157) 0:01:03.609 ********** 2026-04-17 00:44:09.439692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.172703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 
00:44:12.172804 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.172820 | orchestrator | 2026-04-17 00:44:12.172834 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-17 00:44:12.172847 | orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.137) 0:01:03.747 ********** 2026-04-17 00:44:12.172858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.172886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.172899 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.172910 | orchestrator | 2026-04-17 00:44:12.172922 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-17 00:44:12.172934 | orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.135) 0:01:03.882 ********** 2026-04-17 00:44:12.172945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.172956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.172968 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.172980 | orchestrator | 2026-04-17 00:44:12.172993 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-17 00:44:12.173005 | orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.143) 0:01:04.026 ********** 2026-04-17 00:44:12.173018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 
'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.173031 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.173043 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.173055 | orchestrator | 2026-04-17 00:44:12.173067 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-17 00:44:12.173080 | orchestrator | Friday 17 April 2026 00:44:09 +0000 (0:00:00.133) 0:01:04.159 ********** 2026-04-17 00:44:12.173092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.173104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.173117 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.173129 | orchestrator | 2026-04-17 00:44:12.173142 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-17 00:44:12.173154 | orchestrator | Friday 17 April 2026 00:44:10 +0000 (0:00:00.279) 0:01:04.439 ********** 2026-04-17 00:44:12.173167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.173178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.173191 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.173228 | orchestrator | 2026-04-17 00:44:12.173243 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-17 
00:44:12.173256 | orchestrator | Friday 17 April 2026 00:44:10 +0000 (0:00:00.142) 0:01:04.582 ********** 2026-04-17 00:44:12.173269 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:12.173284 | orchestrator | 2026-04-17 00:44:12.173297 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-17 00:44:12.173311 | orchestrator | Friday 17 April 2026 00:44:10 +0000 (0:00:00.481) 0:01:05.063 ********** 2026-04-17 00:44:12.173323 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:12.173337 | orchestrator | 2026-04-17 00:44:12.173350 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-17 00:44:12.173386 | orchestrator | Friday 17 April 2026 00:44:11 +0000 (0:00:00.505) 0:01:05.568 ********** 2026-04-17 00:44:12.173400 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:12.173413 | orchestrator | 2026-04-17 00:44:12.173424 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-17 00:44:12.173436 | orchestrator | Friday 17 April 2026 00:44:11 +0000 (0:00:00.128) 0:01:05.697 ********** 2026-04-17 00:44:12.173446 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'vg_name': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'}) 2026-04-17 00:44:12.173461 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'vg_name': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'}) 2026-04-17 00:44:12.173474 | orchestrator | 2026-04-17 00:44:12.173487 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-17 00:44:12.173500 | orchestrator | Friday 17 April 2026 00:44:11 +0000 (0:00:00.144) 0:01:05.841 ********** 2026-04-17 00:44:12.173530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 
'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.173544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.173557 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.173570 | orchestrator | 2026-04-17 00:44:12.173583 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-17 00:44:12.173595 | orchestrator | Friday 17 April 2026 00:44:11 +0000 (0:00:00.142) 0:01:05.984 ********** 2026-04-17 00:44:12.173615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.173629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.173642 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.173654 | orchestrator | 2026-04-17 00:44:12.173666 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-17 00:44:12.173677 | orchestrator | Friday 17 April 2026 00:44:11 +0000 (0:00:00.143) 0:01:06.128 ********** 2026-04-17 00:44:12.173688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'})  2026-04-17 00:44:12.173701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'})  2026-04-17 00:44:12.173713 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:12.173726 | orchestrator | 2026-04-17 00:44:12.173738 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-17 
00:44:12.173751 | orchestrator | Friday 17 April 2026 00:44:12 +0000 (0:00:00.124) 0:01:06.252 ********** 2026-04-17 00:44:12.173762 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 00:44:12.173774 | orchestrator |  "lvm_report": { 2026-04-17 00:44:12.173787 | orchestrator |  "lv": [ 2026-04-17 00:44:12.173816 | orchestrator |  { 2026-04-17 00:44:12.173829 | orchestrator |  "lv_name": "osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64", 2026-04-17 00:44:12.173843 | orchestrator |  "vg_name": "ceph-037810f1-d9a1-54dd-a4a8-d143a432af64" 2026-04-17 00:44:12.173855 | orchestrator |  }, 2026-04-17 00:44:12.173867 | orchestrator |  { 2026-04-17 00:44:12.173879 | orchestrator |  "lv_name": "osd-block-d097a065-5c07-563d-9f82-653f6f04c198", 2026-04-17 00:44:12.173891 | orchestrator |  "vg_name": "ceph-d097a065-5c07-563d-9f82-653f6f04c198" 2026-04-17 00:44:12.173903 | orchestrator |  } 2026-04-17 00:44:12.173916 | orchestrator |  ], 2026-04-17 00:44:12.173928 | orchestrator |  "pv": [ 2026-04-17 00:44:12.173940 | orchestrator |  { 2026-04-17 00:44:12.173952 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-17 00:44:12.173964 | orchestrator |  "vg_name": "ceph-d097a065-5c07-563d-9f82-653f6f04c198" 2026-04-17 00:44:12.173976 | orchestrator |  }, 2026-04-17 00:44:12.173988 | orchestrator |  { 2026-04-17 00:44:12.174000 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-17 00:44:12.174012 | orchestrator |  "vg_name": "ceph-037810f1-d9a1-54dd-a4a8-d143a432af64" 2026-04-17 00:44:12.174087 | orchestrator |  } 2026-04-17 00:44:12.174100 | orchestrator |  ] 2026-04-17 00:44:12.174113 | orchestrator |  } 2026-04-17 00:44:12.174125 | orchestrator | } 2026-04-17 00:44:12.174137 | orchestrator | 2026-04-17 00:44:12.174150 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:44:12.174162 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-17 00:44:12.174174 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-17 00:44:12.174187 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-17 00:44:12.174199 | orchestrator | 2026-04-17 00:44:12.174211 | orchestrator | 2026-04-17 00:44:12.174224 | orchestrator | 2026-04-17 00:44:12.174236 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:44:12.174248 | orchestrator | Friday 17 April 2026 00:44:12 +0000 (0:00:00.131) 0:01:06.384 ********** 2026-04-17 00:44:12.174260 | orchestrator | =============================================================================== 2026-04-17 00:44:12.174273 | orchestrator | Create block VGs -------------------------------------------------------- 5.69s 2026-04-17 00:44:12.174285 | orchestrator | Create block LVs -------------------------------------------------------- 4.18s 2026-04-17 00:44:12.174297 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-04-17 00:44:12.174309 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.70s 2026-04-17 00:44:12.174321 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s 2026-04-17 00:44:12.174333 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s 2026-04-17 00:44:12.174345 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.48s 2026-04-17 00:44:12.174379 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s 2026-04-17 00:44:12.174401 | orchestrator | Add known links to the list of available block devices ------------------ 1.14s 2026-04-17 00:44:12.449862 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-04-17 
00:44:12.449966 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.78s 2026-04-17 00:44:12.450010 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-04-17 00:44:12.450083 | orchestrator | Print LVM report data --------------------------------------------------- 0.76s 2026-04-17 00:44:12.450094 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-04-17 00:44:12.450135 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.67s 2026-04-17 00:44:12.450148 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2026-04-17 00:44:12.450219 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.64s 2026-04-17 00:44:12.450233 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2026-04-17 00:44:12.450245 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.61s 2026-04-17 00:44:12.450257 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.60s 2026-04-17 00:44:24.045260 | orchestrator | 2026-04-17 00:44:24 | INFO  | Prepare task for execution of facts. 2026-04-17 00:44:24.119831 | orchestrator | 2026-04-17 00:44:24 | INFO  | Task dcd7a77d-8b6a-4e50-980c-a8db004f7b17 (facts) was prepared for execution. 2026-04-17 00:44:24.119912 | orchestrator | 2026-04-17 00:44:24 | INFO  | It takes a moment until task dcd7a77d-8b6a-4e50-980c-a8db004f7b17 (facts) has been started and output is visible here. 
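The play above gathers Ceph LVs and PVs ("Get list of Ceph LVs/PVs with associated VGs"), then merges the two result sets ("Combine JSON from _lvs_cmd_output/_pvs_cmd_output") into the `lvm_report` structure printed at the end. A minimal sketch of that combine step, assuming the inputs have the shape produced by `lvs`/`pvs --reportformat json` (the exact command flags used by the playbook are an assumption):

```python
import json

# Sample inputs in the shape of `lvs --reportformat json -o lv_name,vg_name`
# and `pvs --reportformat json -o pv_name,vg_name`; values are taken from the
# report printed in the log above.
lvs_json = ('{"report": [{"lv": [{"lv_name": '
            '"osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64", '
            '"vg_name": "ceph-037810f1-d9a1-54dd-a4a8-d143a432af64"}]}]}')
pvs_json = ('{"report": [{"pv": [{"pv_name": "/dev/sdb", '
            '"vg_name": "ceph-d097a065-5c07-563d-9f82-653f6f04c198"}]}]}')

def combine_reports(lvs_raw: str, pvs_raw: str) -> dict:
    """Merge the lv and pv report sections into one lvm_report dict."""
    lvs = json.loads(lvs_raw)["report"][0]["lv"]
    pvs = json.loads(pvs_raw)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}

lvm_report = combine_reports(lvs_json, pvs_json)
print(json.dumps(lvm_report, indent=2))
```

From `lvm_report`, the follow-up tasks can build the list of `vg_name`/`lv_name` pairs and fail early if a block/DB/WAL LV referenced in `lvm_volumes` is missing.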
2026-04-17 00:44:35.806864 | orchestrator | 2026-04-17 00:44:35.806965 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-17 00:44:35.806980 | orchestrator | 2026-04-17 00:44:35.806991 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-17 00:44:35.807002 | orchestrator | Friday 17 April 2026 00:44:27 +0000 (0:00:00.346) 0:00:00.346 ********** 2026-04-17 00:44:35.807012 | orchestrator | ok: [testbed-manager] 2026-04-17 00:44:35.807023 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:44:35.807033 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:44:35.807042 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:44:35.807052 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:44:35.807062 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:44:35.807072 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:35.807081 | orchestrator | 2026-04-17 00:44:35.807091 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-17 00:44:35.807101 | orchestrator | Friday 17 April 2026 00:44:28 +0000 (0:00:01.343) 0:00:01.690 ********** 2026-04-17 00:44:35.807111 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:44:35.807121 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:44:35.807131 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:44:35.807141 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:44:35.807150 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:44:35.807160 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:44:35.807169 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:35.807179 | orchestrator | 2026-04-17 00:44:35.807189 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 00:44:35.807198 | orchestrator | 2026-04-17 00:44:35.807236 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-17 00:44:35.807246 | orchestrator | Friday 17 April 2026 00:44:30 +0000 (0:00:01.221) 0:00:02.911 ********** 2026-04-17 00:44:35.807256 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:44:35.807266 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:44:35.807275 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:44:35.807285 | orchestrator | ok: [testbed-manager] 2026-04-17 00:44:35.807294 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:44:35.807304 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:44:35.807313 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:44:35.807323 | orchestrator | 2026-04-17 00:44:35.807375 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 00:44:35.807386 | orchestrator | 2026-04-17 00:44:35.807396 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 00:44:35.807406 | orchestrator | Friday 17 April 2026 00:44:34 +0000 (0:00:04.928) 0:00:07.839 ********** 2026-04-17 00:44:35.807418 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:44:35.807430 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:44:35.807463 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:44:35.807475 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:44:35.807486 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:44:35.807497 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:44:35.807508 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:44:35.807519 | orchestrator | 2026-04-17 00:44:35.807530 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:44:35.807542 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:44:35.807554 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-17 00:44:35.807566 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:44:35.807577 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:44:35.807588 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:44:35.807600 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:44:35.807611 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:44:35.807622 | orchestrator | 2026-04-17 00:44:35.807632 | orchestrator | 2026-04-17 00:44:35.807643 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:44:35.807653 | orchestrator | Friday 17 April 2026 00:44:35 +0000 (0:00:00.514) 0:00:08.354 ********** 2026-04-17 00:44:35.807663 | orchestrator | =============================================================================== 2026-04-17 00:44:35.807672 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.93s 2026-04-17 00:44:35.807682 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s 2026-04-17 00:44:35.807705 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2026-04-17 00:44:35.807714 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-04-17 00:44:47.256270 | orchestrator | 2026-04-17 00:44:47 | INFO  | Prepare task for execution of frr. 2026-04-17 00:44:47.328858 | orchestrator | 2026-04-17 00:44:47 | INFO  | Task dcd9d758-d153-4cb9-87d6-333ecfdd461e (frr) was prepared for execution. 
2026-04-17 00:44:47.328946 | orchestrator | 2026-04-17 00:44:47 | INFO  | It takes a moment until task dcd9d758-d153-4cb9-87d6-333ecfdd461e (frr) has been started and output is visible here. 2026-04-17 00:45:11.112221 | orchestrator | 2026-04-17 00:45:11.112363 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-17 00:45:11.112383 | orchestrator | 2026-04-17 00:45:11.112396 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-17 00:45:11.112408 | orchestrator | Friday 17 April 2026 00:44:50 +0000 (0:00:00.320) 0:00:00.320 ********** 2026-04-17 00:45:11.112420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-17 00:45:11.112434 | orchestrator | 2026-04-17 00:45:11.112446 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-17 00:45:11.112458 | orchestrator | Friday 17 April 2026 00:44:50 +0000 (0:00:00.215) 0:00:00.536 ********** 2026-04-17 00:45:11.112469 | orchestrator | changed: [testbed-manager] 2026-04-17 00:45:11.112482 | orchestrator | 2026-04-17 00:45:11.112494 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-17 00:45:11.112527 | orchestrator | Friday 17 April 2026 00:44:52 +0000 (0:00:01.404) 0:00:01.940 ********** 2026-04-17 00:45:11.112540 | orchestrator | changed: [testbed-manager] 2026-04-17 00:45:11.112551 | orchestrator | 2026-04-17 00:45:11.112562 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-17 00:45:11.112574 | orchestrator | Friday 17 April 2026 00:45:00 +0000 (0:00:08.714) 0:00:10.655 ********** 2026-04-17 00:45:11.112586 | orchestrator | ok: [testbed-manager] 2026-04-17 00:45:11.112598 | orchestrator | 2026-04-17 00:45:11.112611 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-17 00:45:11.112623 | orchestrator | Friday 17 April 2026 00:45:01 +0000 (0:00:01.000) 0:00:11.656 ********** 2026-04-17 00:45:11.112635 | orchestrator | changed: [testbed-manager] 2026-04-17 00:45:11.112646 | orchestrator | 2026-04-17 00:45:11.112659 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-17 00:45:11.112671 | orchestrator | Friday 17 April 2026 00:45:02 +0000 (0:00:00.947) 0:00:12.603 ********** 2026-04-17 00:45:11.112683 | orchestrator | ok: [testbed-manager] 2026-04-17 00:45:11.112695 | orchestrator | 2026-04-17 00:45:11.112707 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-17 00:45:11.112720 | orchestrator | Friday 17 April 2026 00:45:04 +0000 (0:00:01.188) 0:00:13.792 ********** 2026-04-17 00:45:11.112731 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:45:11.112742 | orchestrator | 2026-04-17 00:45:11.112754 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-17 00:45:11.112766 | orchestrator | Friday 17 April 2026 00:45:04 +0000 (0:00:00.167) 0:00:13.960 ********** 2026-04-17 00:45:11.112778 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:45:11.112790 | orchestrator | 2026-04-17 00:45:11.112803 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-17 00:45:11.112817 | orchestrator | Friday 17 April 2026 00:45:04 +0000 (0:00:00.274) 0:00:14.234 ********** 2026-04-17 00:45:11.112829 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:45:11.112841 | orchestrator | 2026-04-17 00:45:11.112853 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-17 00:45:11.112865 | orchestrator | Friday 17 April 2026 00:45:04 +0000 (0:00:00.187) 0:00:14.422 ********** 2026-04-17 
00:45:11.112877 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:45:11.112889 | orchestrator | 2026-04-17 00:45:11.112903 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-17 00:45:11.112915 | orchestrator | Friday 17 April 2026 00:45:04 +0000 (0:00:00.128) 0:00:14.551 ********** 2026-04-17 00:45:11.112928 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:45:11.112940 | orchestrator | 2026-04-17 00:45:11.112953 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-17 00:45:11.112965 | orchestrator | Friday 17 April 2026 00:45:04 +0000 (0:00:00.142) 0:00:14.693 ********** 2026-04-17 00:45:11.112979 | orchestrator | changed: [testbed-manager] 2026-04-17 00:45:11.112993 | orchestrator | 2026-04-17 00:45:11.113006 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-17 00:45:11.113018 | orchestrator | Friday 17 April 2026 00:45:05 +0000 (0:00:00.968) 0:00:15.662 ********** 2026-04-17 00:45:11.113031 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-17 00:45:11.113044 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-17 00:45:11.113059 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-17 00:45:11.113072 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-17 00:45:11.113085 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-17 00:45:11.113099 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-17 00:45:11.113126 | orchestrator | 2026-04-17 00:45:11.113138 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-17 00:45:11.113151 | orchestrator | Friday 17 April 2026 00:45:08 +0000 (0:00:02.215) 0:00:17.877 ********** 2026-04-17 00:45:11.113164 | orchestrator | ok: [testbed-manager] 2026-04-17 00:45:11.113176 | orchestrator | 2026-04-17 00:45:11.113187 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-17 00:45:11.113199 | orchestrator | Friday 17 April 2026 00:45:09 +0000 (0:00:01.148) 0:00:19.026 ********** 2026-04-17 00:45:11.113211 | orchestrator | changed: [testbed-manager] 2026-04-17 00:45:11.113224 | orchestrator | 2026-04-17 00:45:11.113236 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:45:11.113247 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 00:45:11.113259 | orchestrator | 2026-04-17 00:45:11.113270 | orchestrator | 2026-04-17 00:45:11.113340 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:45:11.113358 | orchestrator | Friday 17 April 2026 00:45:10 +0000 (0:00:01.418) 0:00:20.444 ********** 2026-04-17 00:45:11.113370 | orchestrator | =============================================================================== 2026-04-17 00:45:11.113382 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.71s 2026-04-17 00:45:11.113411 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.22s 2026-04-17 00:45:11.113423 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s 2026-04-17 00:45:11.113435 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.40s 2026-04-17 00:45:11.113446 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 
2026-04-17 00:45:11.113457 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.15s 2026-04-17 00:45:11.113469 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s 2026-04-17 00:45:11.113480 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2026-04-17 00:45:11.113491 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s 2026-04-17 00:45:11.113501 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.27s 2026-04-17 00:45:11.113512 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-04-17 00:45:11.113522 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.19s 2026-04-17 00:45:11.113533 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.17s 2026-04-17 00:45:11.113544 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-04-17 00:45:11.113555 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-04-17 00:45:11.293365 | orchestrator | 2026-04-17 00:45:11.295707 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Apr 17 00:45:11 UTC 2026 2026-04-17 00:45:11.295769 | orchestrator | 2026-04-17 00:45:12.422638 | orchestrator | 2026-04-17 00:45:12 | INFO  | Collection nutshell is prepared for execution 2026-04-17 00:45:12.539846 | orchestrator | 2026-04-17 00:45:12 | INFO  | A [0] - dotfiles 2026-04-17 00:45:22.617437 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [0] - homer 2026-04-17 00:45:22.617555 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [0] - netdata 2026-04-17 00:45:22.617571 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [0] - openstackclient 2026-04-17 00:45:22.617582 | orchestrator | 2026-04-17 
00:45:22 | INFO  | A [0] - phpmyadmin 2026-04-17 00:45:22.617594 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [0] - common 2026-04-17 00:45:22.622779 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- loadbalancer 2026-04-17 00:45:22.622855 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [2] --- opensearch 2026-04-17 00:45:22.622892 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [2] --- mariadb-ng 2026-04-17 00:45:22.622984 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [3] ---- horizon 2026-04-17 00:45:22.623011 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [3] ---- keystone 2026-04-17 00:45:22.623416 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- neutron 2026-04-17 00:45:22.623610 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [5] ------ wait-for-nova 2026-04-17 00:45:22.623952 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [6] ------- octavia 2026-04-17 00:45:22.625911 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- barbican 2026-04-17 00:45:22.625967 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- designate 2026-04-17 00:45:22.625985 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- ironic 2026-04-17 00:45:22.626001 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- placement 2026-04-17 00:45:22.626370 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- magnum 2026-04-17 00:45:22.628148 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- openvswitch 2026-04-17 00:45:22.628198 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [2] --- ovn 2026-04-17 00:45:22.628509 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- memcached 2026-04-17 00:45:22.628594 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- redis 2026-04-17 00:45:22.628841 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- rabbitmq-ng 2026-04-17 00:45:22.629308 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [0] - kubernetes 2026-04-17 00:45:22.632192 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- 
kubeconfig 2026-04-17 00:45:22.632246 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- copy-kubeconfig 2026-04-17 00:45:22.632703 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [0] - ceph 2026-04-17 00:45:22.635195 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [1] -- ceph-pools 2026-04-17 00:45:22.635399 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [2] --- copy-ceph-keys 2026-04-17 00:45:22.635495 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [3] ---- cephclient 2026-04-17 00:45:22.635509 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-17 00:45:22.635520 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- wait-for-keystone 2026-04-17 00:45:22.635542 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-17 00:45:22.635553 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [5] ------ glance 2026-04-17 00:45:22.635564 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [5] ------ cinder 2026-04-17 00:45:22.635814 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [5] ------ nova 2026-04-17 00:45:22.635835 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [4] ----- prometheus 2026-04-17 00:45:22.636124 | orchestrator | 2026-04-17 00:45:22 | INFO  | A [5] ------ grafana 2026-04-17 00:45:22.854268 | orchestrator | 2026-04-17 00:45:22 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-17 00:45:22.854397 | orchestrator | 2026-04-17 00:45:22 | INFO  | Tasks are running in the background 2026-04-17 00:45:24.963634 | orchestrator | 2026-04-17 00:45:24 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-17 00:45:27.200956 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:45:27.201187 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:45:27.201998 | orchestrator | 2026-04-17 00:45:27 | INFO 
| Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:27.202495 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:27.205822 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:27.205922 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:27.206442 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:27.207108 | orchestrator | 2026-04-17 00:45:27 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:27.207136 | orchestrator | 2026-04-17 00:45:27 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:30.249140 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:30.249217 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:30.249223 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:30.249228 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:30.249232 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:30.249236 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:30.249240 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:30.249244 | orchestrator | 2026-04-17 00:45:30 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:30.249249 | orchestrator | 2026-04-17 00:45:30 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:33.355777 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:33.355873 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:33.357014 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:33.357466 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:33.358070 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:33.358415 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:33.359197 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:33.362708 | orchestrator | 2026-04-17 00:45:33 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:33.362751 | orchestrator | 2026-04-17 00:45:33 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:36.497693 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:36.497781 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:36.497792 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:36.497823 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:36.498366 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:36.499861 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:36.499972 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:36.501256 | orchestrator | 2026-04-17 00:45:36 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:36.501473 | orchestrator | 2026-04-17 00:45:36 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:39.585838 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:39.589418 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:39.591761 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:39.592905 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:39.594722 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:39.597225 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:39.599453 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:39.601724 | orchestrator | 2026-04-17 00:45:39 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:39.601764 | orchestrator | 2026-04-17 00:45:39 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:42.796705 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:42.798973 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:42.800103 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:42.803640 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:42.804787 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:42.805206 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:42.806401 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:42.808198 | orchestrator | 2026-04-17 00:45:42 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:42.808339 | orchestrator | 2026-04-17 00:45:42 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:46.025072 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:46.025144 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:46.025149 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:46.025168 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:46.025188 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:46.025192 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:46.025196 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:46.025199 | orchestrator | 2026-04-17 00:45:45 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state STARTED
2026-04-17 00:45:46.025204 | orchestrator | 2026-04-17 00:45:45 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:49.405890 | orchestrator |
2026-04-17 00:45:49.406004 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-17 00:45:49.406073 | orchestrator |
2026-04-17 00:45:49.406085 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-17 00:45:49.406094 | orchestrator | Friday 17 April 2026 00:45:32 +0000 (0:00:00.910) 0:00:00.910 **********
2026-04-17 00:45:49.406103 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:45:49.406113 | orchestrator | changed: [testbed-manager]
2026-04-17 00:45:49.406122 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:45:49.406131 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:45:49.406140 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:45:49.406149 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:45:49.406157 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:45:49.406166 | orchestrator |
2026-04-17 00:45:49.406175 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-17 00:45:49.406184 | orchestrator | Friday 17 April 2026 00:45:38 +0000 (0:00:05.848) 0:00:06.758 **********
2026-04-17 00:45:49.406193 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-17 00:45:49.406203 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-17 00:45:49.406213 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-17 00:45:49.406228 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-17 00:45:49.406243 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-17 00:45:49.406258 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-17 00:45:49.406307 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-17 00:45:49.406322 | orchestrator |
2026-04-17 00:45:49.406338 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-04-17 00:45:49.406354 | orchestrator | Friday 17 April 2026 00:45:41 +0000 (0:00:03.063) 0:00:09.822 **********
2026-04-17 00:45:49.406373 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:39.723361', 'end': '2026-04-17 00:45:39.729974', 'delta': '0:00:00.006613', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406386 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:39.265165', 'end': '2026-04-17 00:45:39.270371', 'delta': '0:00:00.005206', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406431 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:39.652809', 'end': '2026-04-17 00:45:39.659242', 'delta': '0:00:00.006433', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406462 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:39.748301', 'end': '2026-04-17 00:45:39.756406', 'delta': '0:00:00.008105', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406479 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:39.296135', 'end': '2026-04-17 00:45:39.302223', 'delta': '0:00:00.006088', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406497 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:40.210653', 'end': '2026-04-17 00:45:40.215334', 'delta': '0:00:00.004681', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406519 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-17 00:45:40.151042', 'end': '2026-04-17 00:45:41.158088', 'delta': '0:00:01.007046', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-17 00:45:49.406546 | orchestrator |
2026-04-17 00:45:49.406561 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-17 00:45:49.406573 | orchestrator | Friday 17 April 2026 00:45:43 +0000 (0:00:01.861) 0:00:11.684 **********
2026-04-17 00:45:49.406586 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-17 00:45:49.406600 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-17 00:45:49.406614 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-17 00:45:49.406628 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-17 00:45:49.406643 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-17 00:45:49.406658 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-17 00:45:49.406672 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-17 00:45:49.406686 | orchestrator |
2026-04-17 00:45:49.406701 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-04-17 00:45:49.406717 | orchestrator | Friday 17 April 2026 00:45:45 +0000 (0:00:02.327) 0:00:14.011 **********
2026-04-17 00:45:49.406733 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-17 00:45:49.406749 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-17 00:45:49.406763 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-17 00:45:49.406777 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-17 00:45:49.406791 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-17 00:45:49.406805 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-17 00:45:49.406820 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-17 00:45:49.406834 | orchestrator |
2026-04-17 00:45:49.406848 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:45:49.406877 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.406895 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.406910 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.407312 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.407349 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.407364 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.407379 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:45:49.407393 | orchestrator |
2026-04-17 00:45:49.407407 | orchestrator |
2026-04-17 00:45:49.407421 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:45:49.407437 | orchestrator | Friday 17 April 2026 00:45:47 +0000 (0:00:01.447) 0:00:15.459 **********
2026-04-17 00:45:49.407451 | orchestrator | ===============================================================================
2026-04-17 00:45:49.407465 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.85s
2026-04-17 00:45:49.407479 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.06s
2026-04-17 00:45:49.407512 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.33s
2026-04-17 00:45:49.407527 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.86s
2026-04-17 00:45:49.407543 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 1.45s
2026-04-17 00:45:49.407557 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:49.407573 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:49.407588 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:49.407603 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:45:49.407617 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:49.407631 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:49.407645 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:49.407666 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:49.407682 | orchestrator | 2026-04-17 00:45:49 | INFO  | Task 25b7317e-e237-4ab3-abb1-b269cfcba42d is in state SUCCESS
2026-04-17 00:45:49.407698 | orchestrator | 2026-04-17 00:45:49 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:52.438333 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:52.438462 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:52.438483 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:52.438495 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:45:52.438506 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:52.438517 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:52.438528 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:52.438539 | orchestrator | 2026-04-17 00:45:52 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:52.438550 | orchestrator | 2026-04-17 00:45:52 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:55.506585 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:55.506668 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:55.506675 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:55.506679 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:45:55.506684 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:55.506690 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:55.506728 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:55.506736 | orchestrator | 2026-04-17 00:45:55 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state STARTED
2026-04-17 00:45:55.506743 | orchestrator | 2026-04-17 00:45:55 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:45:58.565109 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:45:58.565196 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:45:58.565205 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:45:58.565212 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:45:58.570994 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:45:58.571071 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:45:58.576686 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:45:58.576762 | orchestrator | 2026-04-17 00:45:58 | INFO  | Task 41cee002-9bc6-48b7-ad57-03e49f39d201 is in state SUCCESS
2026-04-17 00:45:58.576771 | orchestrator | 2026-04-17 00:45:58 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:01.651387 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:01.651528 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:01.651538 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:01.651545 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:01.651551 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:46:01.651575 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:01.651583 | orchestrator | 2026-04-17 00:46:01 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:01.651590 | orchestrator | 2026-04-17 00:46:01 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:04.699057 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:04.699130 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:04.699825 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:04.700846 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:04.702390 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:46:04.706185 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:04.706783 | orchestrator | 2026-04-17 00:46:04 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:04.706824 | orchestrator | 2026-04-17 00:46:04 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:07.752061 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:07.752156 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:07.753491 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:07.756077 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:07.756136 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:46:07.757019 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:07.759694 | orchestrator | 2026-04-17 00:46:07 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:07.759742 | orchestrator | 2026-04-17 00:46:07 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:10.824056 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:10.837341 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:10.837402 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:10.837410 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:10.837415 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:46:10.837421 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:10.837426 | orchestrator | 2026-04-17 00:46:10 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:10.837432 | orchestrator | 2026-04-17 00:46:10 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:13.915989 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:13.916085 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:13.916097 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:13.916104 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:13.916111 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state STARTED
2026-04-17 00:46:13.916118 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:13.916125 | orchestrator | 2026-04-17 00:46:13 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:13.916132 | orchestrator | 2026-04-17 00:46:13 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:17.125021 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:17.125150 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:17.125804 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:17.126298 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:17.126688 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task a348ff4d-0622-4886-b215-12a09f98a552 is in state SUCCESS
2026-04-17 00:46:17.127368 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:17.128110 | orchestrator | 2026-04-17 00:46:17 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:17.128132 | orchestrator | 2026-04-17 00:46:17 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:20.277534 | orchestrator | 2026-04-17 00:46:20 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:20.277665 | orchestrator | 2026-04-17 00:46:20 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:20.277675 | orchestrator | 2026-04-17 00:46:20 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state STARTED
2026-04-17 00:46:20.277682 | orchestrator | 2026-04-17 00:46:20 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED
2026-04-17 00:46:20.277689 | orchestrator | 2026-04-17 00:46:20 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED
2026-04-17 00:46:20.277695 | orchestrator | 2026-04-17 00:46:20 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:46:20.277703 | orchestrator | 2026-04-17 00:46:20 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:46:23.208426 | orchestrator | 2026-04-17 00:46:23 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED
2026-04-17 00:46:23.210682 | orchestrator | 2026-04-17 00:46:23 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:46:23.210897 | orchestrator | 2026-04-17 00:46:23 | INFO  | Task aed1a972-715d-457f-acbd-b33671b463a3 is in state SUCCESS
2026-04-17 00:46:23.211783 | orchestrator |
2026-04-17 00:46:23.211811 | orchestrator |
2026-04-17 00:46:23.211817 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 00:46:23.211824 | orchestrator |
2026-04-17 00:46:23.211829 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 00:46:23.211835 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.339) 0:00:00.339 **********
2026-04-17 00:46:23.211841 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:46:23.211847 | orchestrator |
2026-04-17 00:46:23.211852 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 00:46:23.211858 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.105) 0:00:00.444 **********
2026-04-17 00:46:23.211864 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-17 00:46:23.211869 | orchestrator |
2026-04-17 00:46:23.211874 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-17 00:46:23.211879 | orchestrator |
2026-04-17 00:46:23.211885 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-17 00:46:23.211890 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.170) 0:00:00.566 **********
2026-04-17 00:46:23.211895 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0
2026-04-17 00:46:23.211900 | orchestrator |
2026-04-17 00:46:23.211905 | orchestrator | TASK [service-images-pull : opensearch | Pull images] **************************
2026-04-17 00:46:23.211910 | orchestrator | Friday 17 April 2026 00:43:47 +0000 (0:00:00.170) 0:00:00.737 **********
2026-04-17 00:46:23.211918 | orchestrator | changed: [testbed-node-0] => (item=opensearch)
2026-04-17 00:46:23.211928 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards)
2026-04-17 00:46:23.211936 | orchestrator |
2026-04-17 00:46:23.211945 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:46:23.211952 | orchestrator | testbed-node-0 : ok=4  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:46:23.211981 | orchestrator |
2026-04-17 00:46:23.211990 | orchestrator |
2026-04-17 00:46:23.211999 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:46:23.212008 | orchestrator | Friday 17 April 2026 00:45:55 +0000 (0:02:08.047) 0:02:08.784 **********
2026-04-17 00:46:23.212016 | orchestrator | ===============================================================================
2026-04-17 00:46:23.212024 | orchestrator | service-images-pull : opensearch | Pull images ------------------------ 128.05s
2026-04-17 00:46:23.212033 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.17s
2026-04-17 00:46:23.212042 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.12s
2026-04-17 00:46:23.212051 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.11s
2026-04-17 00:46:23.212059 | orchestrator |
2026-04-17 00:46:23.212067 | orchestrator |
2026-04-17 00:46:23.212098 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-17 00:46:23.212106 | orchestrator |
2026-04-17 00:46:23.212121 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-17 00:46:23.212131 | orchestrator | Friday 17 April 2026 00:45:33 +0000 (0:00:01.576) 0:00:01.576 **********
2026-04-17 00:46:23.212136 | orchestrator | ok: [testbed-manager] => {
2026-04-17 00:46:23.212142 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-17 00:46:23.212149 | orchestrator | }
2026-04-17 00:46:23.212155 | orchestrator |
2026-04-17 00:46:23.212160 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-17 00:46:23.212165 | orchestrator | Friday 17 April 2026 00:45:34 +0000 (0:00:00.388) 0:00:01.965 **********
2026-04-17 00:46:23.212171 | orchestrator | ok: [testbed-manager]
2026-04-17 00:46:23.212176 | orchestrator |
2026-04-17 00:46:23.212181 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-17 00:46:23.212186 | orchestrator | Friday 17 April 2026 00:45:36 +0000 (0:00:02.702) 0:00:04.667 **********
2026-04-17 00:46:23.212191 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-17 00:46:23.212196 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-17 00:46:23.212201 | orchestrator |
2026-04-17 00:46:23.212206 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-17 00:46:23.212211 | orchestrator | Friday 17 April 2026 00:45:38 +0000 (0:00:01.908) 0:00:06.575 **********
2026-04-17 00:46:23.212216 | orchestrator
| changed: [testbed-manager] 2026-04-17 00:46:23.212222 | orchestrator | 2026-04-17 00:46:23.212227 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-17 00:46:23.212231 | orchestrator | Friday 17 April 2026 00:45:40 +0000 (0:00:02.309) 0:00:08.885 ********** 2026-04-17 00:46:23.212256 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212261 | orchestrator | 2026-04-17 00:46:23.212266 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-17 00:46:23.212271 | orchestrator | Friday 17 April 2026 00:45:42 +0000 (0:00:01.634) 0:00:10.519 ********** 2026-04-17 00:46:23.212276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-04-17 00:46:23.212281 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:23.212286 | orchestrator | 2026-04-17 00:46:23.212291 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-17 00:46:23.212296 | orchestrator | Friday 17 April 2026 00:46:10 +0000 (0:00:28.051) 0:00:38.571 ********** 2026-04-17 00:46:23.212301 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212306 | orchestrator | 2026-04-17 00:46:23.212311 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:46:23.212316 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:23.212321 | orchestrator | 2026-04-17 00:46:23.212326 | orchestrator | 2026-04-17 00:46:23.212332 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:46:23.212354 | orchestrator | Friday 17 April 2026 00:46:13 +0000 (0:00:03.012) 0:00:41.584 ********** 2026-04-17 00:46:23.212359 | orchestrator | =============================================================================== 2026-04-17 00:46:23.212365 
| orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.05s 2026-04-17 00:46:23.212370 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.01s 2026-04-17 00:46:23.212375 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.70s 2026-04-17 00:46:23.212381 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.30s 2026-04-17 00:46:23.212387 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.92s 2026-04-17 00:46:23.212393 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.64s 2026-04-17 00:46:23.212398 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.39s 2026-04-17 00:46:23.212404 | orchestrator | 2026-04-17 00:46:23.212409 | orchestrator | 2026-04-17 00:46:23.212415 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-17 00:46:23.212421 | orchestrator | 2026-04-17 00:46:23.212427 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-17 00:46:23.212432 | orchestrator | Friday 17 April 2026 00:45:32 +0000 (0:00:00.508) 0:00:00.508 ********** 2026-04-17 00:46:23.212438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-17 00:46:23.212446 | orchestrator | 2026-04-17 00:46:23.212451 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-17 00:46:23.212457 | orchestrator | Friday 17 April 2026 00:45:32 +0000 (0:00:00.580) 0:00:01.089 ********** 2026-04-17 00:46:23.212462 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-17 00:46:23.212468 | orchestrator 
| changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-17 00:46:23.212474 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-17 00:46:23.212480 | orchestrator | 2026-04-17 00:46:23.212486 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-17 00:46:23.212491 | orchestrator | Friday 17 April 2026 00:45:34 +0000 (0:00:01.883) 0:00:02.972 ********** 2026-04-17 00:46:23.212497 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212503 | orchestrator | 2026-04-17 00:46:23.212509 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-17 00:46:23.212514 | orchestrator | Friday 17 April 2026 00:45:36 +0000 (0:00:02.008) 0:00:04.980 ********** 2026-04-17 00:46:23.212520 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-17 00:46:23.212525 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:23.212531 | orchestrator | 2026-04-17 00:46:23.212540 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-17 00:46:23.212546 | orchestrator | Friday 17 April 2026 00:46:10 +0000 (0:00:34.062) 0:00:39.042 ********** 2026-04-17 00:46:23.212551 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212557 | orchestrator | 2026-04-17 00:46:23.212563 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-17 00:46:23.212569 | orchestrator | Friday 17 April 2026 00:46:13 +0000 (0:00:02.692) 0:00:41.735 ********** 2026-04-17 00:46:23.212574 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:23.212580 | orchestrator | 2026-04-17 00:46:23.212585 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-17 00:46:23.212591 | orchestrator | Friday 17 April 2026 00:46:14 +0000 (0:00:01.491) 0:00:43.226 
********** 2026-04-17 00:46:23.212595 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212600 | orchestrator | 2026-04-17 00:46:23.212605 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-17 00:46:23.212614 | orchestrator | Friday 17 April 2026 00:46:17 +0000 (0:00:02.850) 0:00:46.077 ********** 2026-04-17 00:46:23.212619 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212624 | orchestrator | 2026-04-17 00:46:23.212629 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-17 00:46:23.212634 | orchestrator | Friday 17 April 2026 00:46:18 +0000 (0:00:01.212) 0:00:47.290 ********** 2026-04-17 00:46:23.212639 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:23.212644 | orchestrator | 2026-04-17 00:46:23.212649 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-17 00:46:23.212654 | orchestrator | Friday 17 April 2026 00:46:20 +0000 (0:00:01.618) 0:00:48.909 ********** 2026-04-17 00:46:23.212659 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:23.212664 | orchestrator | 2026-04-17 00:46:23.212669 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:46:23.212674 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:23.212679 | orchestrator | 2026-04-17 00:46:23.212684 | orchestrator | 2026-04-17 00:46:23.212689 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:46:23.212694 | orchestrator | Friday 17 April 2026 00:46:21 +0000 (0:00:00.771) 0:00:49.680 ********** 2026-04-17 00:46:23.212699 | orchestrator | =============================================================================== 2026-04-17 00:46:23.212704 | orchestrator | osism.services.openstackclient : Manage 
openstackclient service -------- 34.06s 2026-04-17 00:46:23.212709 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.85s 2026-04-17 00:46:23.212714 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.69s 2026-04-17 00:46:23.212719 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.01s 2026-04-17 00:46:23.212724 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.88s 2026-04-17 00:46:23.212733 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.62s 2026-04-17 00:46:23.212738 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.49s 2026-04-17 00:46:23.212743 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.21s 2026-04-17 00:46:23.212748 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.77s 2026-04-17 00:46:23.212753 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.58s 2026-04-17 00:46:23.214483 | orchestrator | 2026-04-17 00:46:23 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:23.216948 | orchestrator | 2026-04-17 00:46:23 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:23.224908 | orchestrator | 2026-04-17 00:46:23 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:23.226227 | orchestrator | 2026-04-17 00:46:23 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:26.317490 | orchestrator | 2026-04-17 00:46:26 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:26.320554 | orchestrator | 2026-04-17 00:46:26 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:26.327276 | orchestrator | 
2026-04-17 00:46:26 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:26.327349 | orchestrator | 2026-04-17 00:46:26 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:26.328926 | orchestrator | 2026-04-17 00:46:26 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:26.328980 | orchestrator | 2026-04-17 00:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:29.381636 | orchestrator | 2026-04-17 00:46:29 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:29.382851 | orchestrator | 2026-04-17 00:46:29 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:29.383908 | orchestrator | 2026-04-17 00:46:29 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:29.384657 | orchestrator | 2026-04-17 00:46:29 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:29.385435 | orchestrator | 2026-04-17 00:46:29 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:29.385474 | orchestrator | 2026-04-17 00:46:29 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:32.440405 | orchestrator | 2026-04-17 00:46:32 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:32.443497 | orchestrator | 2026-04-17 00:46:32 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:32.446219 | orchestrator | 2026-04-17 00:46:32 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:32.449067 | orchestrator | 2026-04-17 00:46:32 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:32.453041 | orchestrator | 2026-04-17 00:46:32 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:32.453311 | orchestrator | 
2026-04-17 00:46:32 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:35.496446 | orchestrator | 2026-04-17 00:46:35 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:35.496544 | orchestrator | 2026-04-17 00:46:35 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:35.496559 | orchestrator | 2026-04-17 00:46:35 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:35.498955 | orchestrator | 2026-04-17 00:46:35 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:35.499298 | orchestrator | 2026-04-17 00:46:35 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:35.499328 | orchestrator | 2026-04-17 00:46:35 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:38.546106 | orchestrator | 2026-04-17 00:46:38 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:38.546544 | orchestrator | 2026-04-17 00:46:38 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:38.547125 | orchestrator | 2026-04-17 00:46:38 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:38.547705 | orchestrator | 2026-04-17 00:46:38 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:38.548500 | orchestrator | 2026-04-17 00:46:38 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:38.548534 | orchestrator | 2026-04-17 00:46:38 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:41.594133 | orchestrator | 2026-04-17 00:46:41 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:41.596153 | orchestrator | 2026-04-17 00:46:41 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:41.596210 | orchestrator | 2026-04-17 00:46:41 | INFO  | 
Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:41.597185 | orchestrator | 2026-04-17 00:46:41 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:41.598425 | orchestrator | 2026-04-17 00:46:41 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:41.598496 | orchestrator | 2026-04-17 00:46:41 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:44.629420 | orchestrator | 2026-04-17 00:46:44 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:44.630853 | orchestrator | 2026-04-17 00:46:44 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:44.630899 | orchestrator | 2026-04-17 00:46:44 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:44.630908 | orchestrator | 2026-04-17 00:46:44 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:44.631310 | orchestrator | 2026-04-17 00:46:44 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:44.631340 | orchestrator | 2026-04-17 00:46:44 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:47.673971 | orchestrator | 2026-04-17 00:46:47 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:47.674960 | orchestrator | 2026-04-17 00:46:47 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:47.675875 | orchestrator | 2026-04-17 00:46:47 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state STARTED 2026-04-17 00:46:47.678610 | orchestrator | 2026-04-17 00:46:47 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:47.678655 | orchestrator | 2026-04-17 00:46:47 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:47.678665 | orchestrator | 2026-04-17 00:46:47 | INFO  | Wait 1 
second(s) until the next check 2026-04-17 00:46:50.728108 | orchestrator | 2026-04-17 00:46:50 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:50.732395 | orchestrator | 2026-04-17 00:46:50 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:50.733964 | orchestrator | 2026-04-17 00:46:50 | INFO  | Task a55686e6-3681-4972-a425-566f1c811b5c is in state SUCCESS 2026-04-17 00:46:50.735634 | orchestrator | 2026-04-17 00:46:50 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state STARTED 2026-04-17 00:46:50.737012 | orchestrator | 2026-04-17 00:46:50 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:50.737067 | orchestrator | 2026-04-17 00:46:50 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:53.768247 | orchestrator | 2026-04-17 00:46:53 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:53.768410 | orchestrator | 2026-04-17 00:46:53 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:53.768422 | orchestrator | 2026-04-17 00:46:53 | INFO  | Task 75a76109-f898-4baf-bab9-5e83b432fafa is in state SUCCESS 2026-04-17 00:46:53.768891 | orchestrator | 2026-04-17 00:46:53.768972 | orchestrator | 2026-04-17 00:46:53.768982 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-17 00:46:53.768991 | orchestrator | 2026-04-17 00:46:53.768997 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-17 00:46:53.769005 | orchestrator | Friday 17 April 2026 00:45:50 +0000 (0:00:00.299) 0:00:00.299 ********** 2026-04-17 00:46:53.769012 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769019 | orchestrator | 2026-04-17 00:46:53.769026 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-17 00:46:53.769033 | 
orchestrator | Friday 17 April 2026 00:45:51 +0000 (0:00:01.239) 0:00:01.539 ********** 2026-04-17 00:46:53.769040 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-17 00:46:53.769048 | orchestrator | 2026-04-17 00:46:53.769074 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-17 00:46:53.769079 | orchestrator | Friday 17 April 2026 00:45:52 +0000 (0:00:00.793) 0:00:02.332 ********** 2026-04-17 00:46:53.769083 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769088 | orchestrator | 2026-04-17 00:46:53.769092 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-17 00:46:53.769095 | orchestrator | Friday 17 April 2026 00:45:53 +0000 (0:00:01.139) 0:00:03.472 ********** 2026-04-17 00:46:53.769099 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-17 00:46:53.769104 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769107 | orchestrator | 2026-04-17 00:46:53.769111 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-17 00:46:53.769115 | orchestrator | Friday 17 April 2026 00:46:45 +0000 (0:00:52.020) 0:00:55.492 ********** 2026-04-17 00:46:53.769119 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769123 | orchestrator | 2026-04-17 00:46:53.769127 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:46:53.769131 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.769136 | orchestrator | 2026-04-17 00:46:53.769140 | orchestrator | 2026-04-17 00:46:53.769144 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:46:53.769148 | orchestrator | Friday 17 April 2026 00:46:48 +0000 (0:00:03.000) 0:00:58.492 
********** 2026-04-17 00:46:53.769151 | orchestrator | =============================================================================== 2026-04-17 00:46:53.769155 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.02s 2026-04-17 00:46:53.769159 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.00s 2026-04-17 00:46:53.769163 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.24s 2026-04-17 00:46:53.769167 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.14s 2026-04-17 00:46:53.769171 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.79s 2026-04-17 00:46:53.769175 | orchestrator | 2026-04-17 00:46:53.769179 | orchestrator | 2026-04-17 00:46:53.769182 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:46:53.769186 | orchestrator | 2026-04-17 00:46:53.769190 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 00:46:53.769194 | orchestrator | Friday 17 April 2026 00:45:33 +0000 (0:00:00.625) 0:00:00.625 ********** 2026-04-17 00:46:53.769197 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-17 00:46:53.769202 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-17 00:46:53.769205 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-17 00:46:53.769209 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-17 00:46:53.769235 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-17 00:46:53.769239 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-17 00:46:53.769253 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-17 00:46:53.769257 | orchestrator | 
2026-04-17 00:46:53.769261 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-17 00:46:53.769264 | orchestrator | 2026-04-17 00:46:53.769268 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-17 00:46:53.769272 | orchestrator | Friday 17 April 2026 00:45:35 +0000 (0:00:02.312) 0:00:02.938 ********** 2026-04-17 00:46:53.769284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:46:53.769292 | orchestrator | 2026-04-17 00:46:53.769301 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-17 00:46:53.769304 | orchestrator | Friday 17 April 2026 00:45:37 +0000 (0:00:01.801) 0:00:04.740 ********** 2026-04-17 00:46:53.769308 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:46:53.769312 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769316 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:46:53.769320 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:46:53.769323 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:46:53.769327 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:46:53.769331 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:46:53.769334 | orchestrator | 2026-04-17 00:46:53.769338 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-17 00:46:53.769342 | orchestrator | Friday 17 April 2026 00:45:40 +0000 (0:00:03.014) 0:00:07.754 ********** 2026-04-17 00:46:53.769346 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:46:53.769350 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:46:53.769354 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:46:53.769357 | orchestrator | ok: [testbed-node-2] 2026-04-17 
00:46:53.769361 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:46:53.769365 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:46:53.769369 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769372 | orchestrator | 2026-04-17 00:46:53.769386 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-17 00:46:53.769391 | orchestrator | Friday 17 April 2026 00:45:43 +0000 (0:00:03.091) 0:00:10.845 ********** 2026-04-17 00:46:53.769395 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:46:53.769399 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:46:53.769403 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:46:53.769407 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:46:53.769411 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:46:53.769414 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:46:53.769418 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769422 | orchestrator | 2026-04-17 00:46:53.769426 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-17 00:46:53.769430 | orchestrator | Friday 17 April 2026 00:45:46 +0000 (0:00:03.184) 0:00:14.029 ********** 2026-04-17 00:46:53.769434 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:46:53.769438 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:46:53.769441 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:46:53.769445 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:46:53.769449 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:46:53.769453 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:46:53.769457 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769462 | orchestrator | 2026-04-17 00:46:53.769466 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-17 00:46:53.769471 | orchestrator | Friday 17 April 2026 00:45:56 +0000 
(0:00:10.458) 0:00:24.488 ********** 2026-04-17 00:46:53.769475 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:46:53.769479 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:46:53.769484 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:46:53.769488 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:46:53.769492 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:46:53.769497 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:46:53.769501 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769505 | orchestrator | 2026-04-17 00:46:53.769510 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-17 00:46:53.769514 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:26.088) 0:00:50.576 ********** 2026-04-17 00:46:53.769520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:46:53.769525 | orchestrator | 2026-04-17 00:46:53.769529 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-17 00:46:53.769537 | orchestrator | Friday 17 April 2026 00:46:24 +0000 (0:00:01.306) 0:00:51.883 ********** 2026-04-17 00:46:53.769542 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-17 00:46:53.769547 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-17 00:46:53.769551 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-17 00:46:53.769555 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-17 00:46:53.769560 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-17 00:46:53.769564 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-17 00:46:53.769569 | orchestrator | changed: [testbed-node-1] => 
(item=netdata.conf) 2026-04-17 00:46:53.769573 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-17 00:46:53.769577 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-17 00:46:53.769582 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-17 00:46:53.769586 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-17 00:46:53.769590 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-17 00:46:53.769595 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-17 00:46:53.769601 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-17 00:46:53.769608 | orchestrator | 2026-04-17 00:46:53.769615 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-17 00:46:53.769627 | orchestrator | Friday 17 April 2026 00:46:29 +0000 (0:00:04.837) 0:00:56.720 ********** 2026-04-17 00:46:53.769637 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769648 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:46:53.769653 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:46:53.769659 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:46:53.769666 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:46:53.769673 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:46:53.769679 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:46:53.769685 | orchestrator | 2026-04-17 00:46:53.769691 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-17 00:46:53.769697 | orchestrator | Friday 17 April 2026 00:46:30 +0000 (0:00:01.508) 0:00:58.228 ********** 2026-04-17 00:46:53.769704 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769712 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:46:53.769719 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:46:53.769726 | orchestrator | changed: [testbed-node-2] 
2026-04-17 00:46:53.769732 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:46:53.769738 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:46:53.769744 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:46:53.769751 | orchestrator | 2026-04-17 00:46:53.769757 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-17 00:46:53.769763 | orchestrator | Friday 17 April 2026 00:46:32 +0000 (0:00:02.124) 0:01:00.353 ********** 2026-04-17 00:46:53.769770 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:46:53.769776 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:46:53.769784 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769791 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:46:53.769798 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:46:53.769804 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:46:53.769812 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:46:53.769818 | orchestrator | 2026-04-17 00:46:53.769825 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-17 00:46:53.769832 | orchestrator | Friday 17 April 2026 00:46:34 +0000 (0:00:01.824) 0:01:02.178 ********** 2026-04-17 00:46:53.769838 | orchestrator | ok: [testbed-manager] 2026-04-17 00:46:53.769844 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:46:53.769856 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:46:53.769864 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:46:53.769871 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:46:53.769877 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:46:53.769891 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:46:53.769898 | orchestrator | 2026-04-17 00:46:53.769905 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-17 00:46:53.769911 | orchestrator | Friday 17 April 2026 00:46:36 +0000 (0:00:01.988) 0:01:04.166 ********** 2026-04-17 
00:46:53.769915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-17 00:46:53.769921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:46:53.769926 | orchestrator | 2026-04-17 00:46:53.769930 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-17 00:46:53.769933 | orchestrator | Friday 17 April 2026 00:46:37 +0000 (0:00:01.231) 0:01:05.397 ********** 2026-04-17 00:46:53.769937 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769941 | orchestrator | 2026-04-17 00:46:53.769945 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-17 00:46:53.769949 | orchestrator | Friday 17 April 2026 00:46:39 +0000 (0:00:01.630) 0:01:07.028 ********** 2026-04-17 00:46:53.769953 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:46:53.769956 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:46:53.769960 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:46:53.769964 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:46:53.769968 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:46:53.769971 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:46:53.769975 | orchestrator | changed: [testbed-manager] 2026-04-17 00:46:53.769979 | orchestrator | 2026-04-17 00:46:53.769983 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:46:53.769986 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.769991 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 
00:46:53.769995 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.769999 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.770003 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.770006 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.770010 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:46:53.770207 | orchestrator | 2026-04-17 00:46:53.770271 | orchestrator | 2026-04-17 00:46:53.770276 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:46:53.770281 | orchestrator | Friday 17 April 2026 00:46:50 +0000 (0:00:11.222) 0:01:18.251 ********** 2026-04-17 00:46:53.770285 | orchestrator | =============================================================================== 2026-04-17 00:46:53.770288 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 26.09s 2026-04-17 00:46:53.770297 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.22s 2026-04-17 00:46:53.770302 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.46s 2026-04-17 00:46:53.770306 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.84s 2026-04-17 00:46:53.770310 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.18s 2026-04-17 00:46:53.770314 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.09s 2026-04-17 00:46:53.770324 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.01s 2026-04-17 00:46:53.770328 | 
orchestrator | Group hosts based on enabled services ----------------------------------- 2.31s 2026-04-17 00:46:53.770332 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.12s 2026-04-17 00:46:53.770336 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.99s 2026-04-17 00:46:53.770339 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.82s 2026-04-17 00:46:53.770343 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.80s 2026-04-17 00:46:53.770347 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.63s 2026-04-17 00:46:53.770351 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.51s 2026-04-17 00:46:53.770355 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.31s 2026-04-17 00:46:53.770359 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.23s 2026-04-17 00:46:53.770367 | orchestrator | 2026-04-17 00:46:53 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:53.770372 | orchestrator | 2026-04-17 00:46:53 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:46:56.811390 | orchestrator | 2026-04-17 00:46:56 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state STARTED 2026-04-17 00:46:56.813603 | orchestrator | 2026-04-17 00:46:56 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:46:56.814932 | orchestrator | 2026-04-17 00:46:56 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:46:56.815194 | orchestrator | 2026-04-17 00:46:56 | INFO  | Wait 1 second(s) until the next check [identical STARTED polls for tasks f05d68fb-ccce-482f-9074-53872d74ad2e, bab2767c-cde7-46f7-b455-da2165b39c23 and 448bf28f-15f4-462d-a805-d6d91b6eda34 repeated every ~3 seconds from 00:46:59 through 00:47:42; repeated poll rounds trimmed] 2026-04-17 00:47:45.596914 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task f05d68fb-ccce-482f-9074-53872d74ad2e is in state SUCCESS 2026-04-17 00:47:45.598361 | orchestrator | 2026-04-17 00:47:45.598451 | orchestrator | 2026-04-17 00:47:45.598459 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-04-17 00:47:45.598463 | orchestrator | 2026-04-17 00:47:45.598466 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 00:47:45.598470 | orchestrator | Friday 17 April 2026 00:45:26 +0000 (0:00:00.297) 0:00:00.297 ********** 2026-04-17 00:47:45.598474 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:47:45.598478 | orchestrator | 2026-04-17 00:47:45.598483 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-17 00:47:45.598488 | orchestrator | Friday 17 April 2026 00:45:27 +0000 (0:00:01.039) 0:00:01.336 ********** 2026-04-17 00:47:45.598493 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 00:47:45.598499 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 00:47:45.598504 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 00:47:45.598509 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 00:47:45.598515 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 00:47:45.598520 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.598539 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-17 00:47:45.598545 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.598550 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.599037 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 
'cron']) 2026-04-17 00:47:45.599050 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.599056 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.599061 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599067 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599072 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.599082 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599088 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599093 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-17 00:47:45.599099 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599104 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599109 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-17 00:47:45.599114 | orchestrator | 2026-04-17 00:47:45.599120 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-17 00:47:45.599125 | orchestrator | Friday 17 April 2026 00:45:30 +0000 (0:00:03.665) 0:00:05.002 ********** 2026-04-17 00:47:45.599130 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:47:45.599136 | orchestrator | 2026-04-17 00:47:45.599141 | 
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-17 00:47:45.599146 | orchestrator | Friday 17 April 2026 00:45:32 +0000 (0:00:01.154) 0:00:06.156 ********** 2026-04-17 00:47:45.599157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.599300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.599389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) [changed reported with the same fluentd, kolla-toolbox and cron item dicts for all of testbed-manager and testbed-node-0 through testbed-node-5; repeated per-host entries trimmed] 2026-04-17 00:47:45.599490 | orchestrator | 2026-04-17 00:47:45.599496 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2026-04-17 00:47:45.599501 | orchestrator | Friday 17 April 2026 00:45:36 +0000 (0:00:03.905) 0:00:10.062 ********** 2026-04-17 00:47:45.599507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 
00:47:45.599567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599578 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:47:45.599584 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:47:45.599589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599634 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:47:45.599640 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:47:45.599649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599672 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:47:45.599678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 
00:47:45.599684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599689 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:47:45.599697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599721 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:47:45.599727 | orchestrator | 2026-04-17 00:47:45.599732 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-17 00:47:45.599738 | orchestrator | Friday 17 April 2026 00:45:37 +0000 (0:00:01.633) 0:00:11.695 ********** 2026-04-17 00:47:45.599744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599750 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599781 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:47:45.599792 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599809 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:47:45.599815 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:47:45.599821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599855 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599866 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:47:45.599872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599889 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:47:45.599895 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:47:45.599900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-17 00:47:45.599911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.599924 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:47:45.599930 | orchestrator | 2026-04-17 00:47:45.599935 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-17 00:47:45.599941 | orchestrator | Friday 17 April 2026 00:45:40 +0000 (0:00:03.247) 0:00:14.943 ********** 2026-04-17 00:47:45.599946 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:47:45.599952 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:47:45.599958 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:47:45.599964 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:47:45.599969 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:47:45.599979 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:47:45.599985 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:47:45.599990 | orchestrator | 2026-04-17 00:47:45.599996 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-17 00:47:45.600002 | orchestrator | Friday 17 April 2026 00:45:42 +0000 (0:00:01.576) 0:00:16.520 ********** 2026-04-17 00:47:45.600007 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:47:45.600013 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:47:45.600018 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:47:45.600024 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 00:47:45.600030 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:47:45.600036 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:47:45.600041 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:47:45.600047 | orchestrator | 2026-04-17 00:47:45.600052 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-17 00:47:45.600058 | orchestrator | Friday 17 April 2026 00:45:43 +0000 (0:00:00.759) 0:00:17.280 ********** 2026-04-17 00:47:45.600064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600079 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600107 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600187 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600210 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600216 | orchestrator | 2026-04-17 00:47:45.600221 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-17 00:47:45.600228 | orchestrator | Friday 17 April 2026 00:45:49 +0000 (0:00:06.686) 0:00:23.966 ********** 2026-04-17 00:47:45.600234 | orchestrator | [WARNING]: Skipped 2026-04-17 00:47:45.600240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-17 00:47:45.600297 | orchestrator | to this access issue: 2026-04-17 00:47:45.600303 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-17 00:47:45.600309 | orchestrator | directory 2026-04-17 00:47:45.600315 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 00:47:45.600321 | orchestrator | 2026-04-17 00:47:45.600327 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-17 00:47:45.600332 | orchestrator | Friday 17 April 2026 00:45:51 +0000 (0:00:01.421) 0:00:25.387 ********** 2026-04-17 00:47:45.600338 | orchestrator | [WARNING]: Skipped 2026-04-17 00:47:45.600344 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-17 00:47:45.600352 | orchestrator | to this access issue: 2026-04-17 00:47:45.600358 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-17 00:47:45.600364 | orchestrator | directory 2026-04-17 00:47:45.600371 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 00:47:45.600376 | orchestrator | 2026-04-17 00:47:45.600382 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-17 00:47:45.600392 | orchestrator | Friday 17 April 2026 
00:45:52 +0000 (0:00:01.189) 0:00:26.577 ********** 2026-04-17 00:47:45.600397 | orchestrator | [WARNING]: Skipped 2026-04-17 00:47:45.600402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-17 00:47:45.600408 | orchestrator | to this access issue: 2026-04-17 00:47:45.600413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-17 00:47:45.600419 | orchestrator | directory 2026-04-17 00:47:45.600424 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 00:47:45.600430 | orchestrator | 2026-04-17 00:47:45.600435 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-17 00:47:45.600441 | orchestrator | Friday 17 April 2026 00:45:53 +0000 (0:00:00.655) 0:00:27.232 ********** 2026-04-17 00:47:45.600446 | orchestrator | [WARNING]: Skipped 2026-04-17 00:47:45.600452 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-17 00:47:45.600457 | orchestrator | to this access issue: 2026-04-17 00:47:45.600463 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-17 00:47:45.600468 | orchestrator | directory 2026-04-17 00:47:45.600473 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 00:47:45.600479 | orchestrator | 2026-04-17 00:47:45.600484 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-17 00:47:45.600489 | orchestrator | Friday 17 April 2026 00:45:54 +0000 (0:00:00.872) 0:00:28.105 ********** 2026-04-17 00:47:45.600495 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.600500 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.600506 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.600511 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.600517 | orchestrator | changed: [testbed-node-5] 
2026-04-17 00:47:45.600522 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:47:45.600528 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.600534 | orchestrator | 2026-04-17 00:47:45.600539 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-17 00:47:45.600544 | orchestrator | Friday 17 April 2026 00:45:57 +0000 (0:00:03.375) 0:00:31.481 ********** 2026-04-17 00:47:45.600550 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600562 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600578 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600584 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-17 00:47:45.600589 | orchestrator | 2026-04-17 00:47:45.600594 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-17 00:47:45.600600 | orchestrator | Friday 17 April 2026 00:46:00 +0000 (0:00:03.174) 0:00:34.656 ********** 2026-04-17 00:47:45.600606 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.600611 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.600617 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.600622 | orchestrator | changed: [testbed-node-2] 2026-04-17 
00:47:45.600628 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.600633 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.600638 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:47:45.600644 | orchestrator | 2026-04-17 00:47:45.600649 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-17 00:47:45.600659 | orchestrator | Friday 17 April 2026 00:46:05 +0000 (0:00:04.713) 0:00:39.370 ********** 2026-04-17 00:47:45.600667 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.600682 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.600694 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600700 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.600715 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.600731 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600737 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600742 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600747 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.600758 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600766 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 
00:47:45.600781 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:47:45.600793 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600798 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.600804 | orchestrator | 2026-04-17 00:47:45.600809 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-17 00:47:45.600814 | orchestrator | Friday 17 April 2026 00:46:08 +0000 (0:00:02.898) 0:00:42.268 ********** 2026-04-17 00:47:45.600819 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600829 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600834 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600845 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600850 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600855 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-17 00:47:45.600860 | orchestrator | 2026-04-17 00:47:45.600865 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-17 00:47:45.600871 | orchestrator | Friday 17 April 2026 00:46:12 +0000 (0:00:03.912) 0:00:46.181 ********** 2026-04-17 00:47:45.600876 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600886 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600891 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600896 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600901 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600906 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-17 00:47:45.600911 | orchestrator | 2026-04-17 00:47:45.600917 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-17 00:47:45.600922 | orchestrator | Friday 17 April 2026 00:46:14 +0000 (0:00:02.530) 0:00:48.711 ********** 2026-04-17 00:47:45.600930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600944 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.600950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.601002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.601017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.601025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-17 00:47:45.601049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601064 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601108 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601130 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601136 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:47:45.601142 | orchestrator | 2026-04-17 00:47:45.601148 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-17 00:47:45.601153 | orchestrator | Friday 17 April 2026 00:46:18 +0000 (0:00:03.987) 0:00:52.699 ********** 2026-04-17 00:47:45.601159 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.601164 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.601170 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:47:45.601185 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.601191 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.601196 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:47:45.601201 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.601206 | orchestrator | 2026-04-17 00:47:45.601211 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-17 00:47:45.601216 | orchestrator | Friday 17 April 2026 00:46:20 +0000 (0:00:02.012) 0:00:54.712 ********** 2026-04-17 00:47:45.601222 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.601227 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.601232 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.601237 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:47:45.601242 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.601247 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.601252 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:47:45.601257 | orchestrator | 2026-04-17 
00:47:45.601263 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601268 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:01.781) 0:00:56.493 ********** 2026-04-17 00:47:45.601273 | orchestrator | 2026-04-17 00:47:45.601278 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601284 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:00.097) 0:00:56.591 ********** 2026-04-17 00:47:45.601289 | orchestrator | 2026-04-17 00:47:45.601294 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601299 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:00.072) 0:00:56.664 ********** 2026-04-17 00:47:45.601304 | orchestrator | 2026-04-17 00:47:45.601309 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601317 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:00.073) 0:00:56.737 ********** 2026-04-17 00:47:45.601322 | orchestrator | 2026-04-17 00:47:45.601327 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601333 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:00.074) 0:00:56.812 ********** 2026-04-17 00:47:45.601338 | orchestrator | 2026-04-17 00:47:45.601343 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601348 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:00.099) 0:00:56.912 ********** 2026-04-17 00:47:45.601353 | orchestrator | 2026-04-17 00:47:45.601359 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-17 00:47:45.601364 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:00.063) 0:00:56.975 ********** 2026-04-17 00:47:45.601372 | orchestrator | 
2026-04-17 00:47:45.601377 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-17 00:47:45.601385 | orchestrator | Friday 17 April 2026 00:46:23 +0000 (0:00:00.089) 0:00:57.064 ********** 2026-04-17 00:47:45.601391 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.601396 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.601401 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.601406 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.601411 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.601416 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:47:45.601421 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:47:45.601427 | orchestrator | 2026-04-17 00:47:45.601432 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-17 00:47:45.601437 | orchestrator | Friday 17 April 2026 00:46:57 +0000 (0:00:34.506) 0:01:31.571 ********** 2026-04-17 00:47:45.601442 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.601447 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.601452 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.601457 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.601462 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:47:45.601467 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.601472 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:47:45.601477 | orchestrator | 2026-04-17 00:47:45.601483 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-17 00:47:45.601488 | orchestrator | Friday 17 April 2026 00:47:36 +0000 (0:00:38.559) 0:02:10.130 ********** 2026-04-17 00:47:45.601493 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:47:45.601498 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:47:45.601502 | orchestrator | ok: [testbed-manager] 
2026-04-17 00:47:45.601505 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:47:45.601508 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:47:45.601511 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:47:45.601514 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:47:45.601517 | orchestrator | 2026-04-17 00:47:45.601520 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-17 00:47:45.601523 | orchestrator | Friday 17 April 2026 00:47:38 +0000 (0:00:02.250) 0:02:12.381 ********** 2026-04-17 00:47:45.601526 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:47:45.601529 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:47:45.601532 | orchestrator | changed: [testbed-manager] 2026-04-17 00:47:45.601535 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:47:45.601538 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:47:45.601541 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:47:45.601544 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:47:45.601547 | orchestrator | 2026-04-17 00:47:45.601550 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:47:45.601554 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 00:47:45.601557 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 00:47:45.601560 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 00:47:45.601563 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 00:47:45.601566 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 00:47:45.601569 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  
rescued=0 ignored=0 2026-04-17 00:47:45.601574 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 00:47:45.601577 | orchestrator | 2026-04-17 00:47:45.601580 | orchestrator | 2026-04-17 00:47:45.601583 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:47:45.601586 | orchestrator | Friday 17 April 2026 00:47:42 +0000 (0:00:04.242) 0:02:16.623 ********** 2026-04-17 00:47:45.601590 | orchestrator | =============================================================================== 2026-04-17 00:47:45.601593 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 38.56s 2026-04-17 00:47:45.601596 | orchestrator | common : Restart fluentd container ------------------------------------- 34.51s 2026-04-17 00:47:45.601599 | orchestrator | common : Copying over config.json files for services -------------------- 6.69s 2026-04-17 00:47:45.601602 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.71s 2026-04-17 00:47:45.601605 | orchestrator | common : Restart cron container ----------------------------------------- 4.24s 2026-04-17 00:47:45.601610 | orchestrator | common : Check common containers ---------------------------------------- 3.99s 2026-04-17 00:47:45.601613 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.91s 2026-04-17 00:47:45.601616 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.91s 2026-04-17 00:47:45.601619 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.67s 2026-04-17 00:47:45.601622 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.38s 2026-04-17 00:47:45.601625 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.25s 2026-04-17 00:47:45.601628 
| orchestrator | common : Copying over cron logrotate config file ------------------------ 3.17s 2026-04-17 00:47:45.601631 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.90s 2026-04-17 00:47:45.601634 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.53s 2026-04-17 00:47:45.601639 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.25s 2026-04-17 00:47:45.601642 | orchestrator | common : Creating log volume -------------------------------------------- 2.01s 2026-04-17 00:47:45.601645 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.78s 2026-04-17 00:47:45.601648 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.63s 2026-04-17 00:47:45.601651 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.58s 2026-04-17 00:47:45.601654 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.42s 2026-04-17 00:47:45.601657 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:47:45.601661 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:47:45.601664 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:47:45.601667 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:47:45.601840 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task 16d0cb8e-076c-4d8e-8985-ec205cfa5572 is in state STARTED 2026-04-17 00:47:45.603443 | orchestrator | 2026-04-17 00:47:45 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:47:45.603474 | orchestrator | 2026-04-17 00:47:45 | INFO  | Wait 1 
second(s) until the next check 2026-04-17 00:47:48.629060 | orchestrator | 2026-04-17 00:47:48 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:47:48.629272 | orchestrator | 2026-04-17 00:47:48 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:47:48.630382 | orchestrator | 2026-04-17 00:47:48 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:47:48.630922 | orchestrator | 2026-04-17 00:47:48 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:47:48.631608 | orchestrator | 2026-04-17 00:47:48 | INFO  | Task 16d0cb8e-076c-4d8e-8985-ec205cfa5572 is in state STARTED 2026-04-17 00:47:48.632435 | orchestrator | 2026-04-17 00:47:48 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:47:48.632465 | orchestrator | 2026-04-17 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:47:51.670333 | orchestrator | 2026-04-17 00:47:51 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:47:51.670541 | orchestrator | 2026-04-17 00:47:51 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:47:51.671042 | orchestrator | 2026-04-17 00:47:51 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:47:51.671977 | orchestrator | 2026-04-17 00:47:51 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:47:51.672992 | orchestrator | 2026-04-17 00:47:51 | INFO  | Task 16d0cb8e-076c-4d8e-8985-ec205cfa5572 is in state STARTED 2026-04-17 00:47:51.674130 | orchestrator | 2026-04-17 00:47:51 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:47:51.674157 | orchestrator | 2026-04-17 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:47:54.710350 | orchestrator | 2026-04-17 00:47:54 | INFO  | Task 
cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:47:54.712321 | orchestrator | 2026-04-17 00:47:54 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:47:54.713728 | orchestrator | 2026-04-17 00:47:54 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:47:54.713757 | orchestrator | 2026-04-17 00:47:54 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:47:54.714103 | orchestrator | 2026-04-17 00:47:54 | INFO  | Task 16d0cb8e-076c-4d8e-8985-ec205cfa5572 is in state STARTED 2026-04-17 00:47:54.714908 | orchestrator | 2026-04-17 00:47:54 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:47:54.714942 | orchestrator | 2026-04-17 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:47:57.751042 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:47:57.751520 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:47:57.753419 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:47:57.755061 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:47:57.756704 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:47:57.757654 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task 16d0cb8e-076c-4d8e-8985-ec205cfa5572 is in state SUCCESS 2026-04-17 00:47:57.759046 | orchestrator | 2026-04-17 00:47:57 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:47:57.759078 | orchestrator | 2026-04-17 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:00.790129 | orchestrator | 2026-04-17 00:48:00 | INFO  | Task 
cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:00.795532 | orchestrator | 2026-04-17 00:48:00 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:00.795918 | orchestrator | 2026-04-17 00:48:00 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:00.799535 | orchestrator | 2026-04-17 00:48:00 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:00.800117 | orchestrator | 2026-04-17 00:48:00 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:48:00.803311 | orchestrator | 2026-04-17 00:48:00 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:00.803340 | orchestrator | 2026-04-17 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:03.854260 | orchestrator | 2026-04-17 00:48:03 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:03.854374 | orchestrator | 2026-04-17 00:48:03 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:03.854390 | orchestrator | 2026-04-17 00:48:03 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:03.854907 | orchestrator | 2026-04-17 00:48:03 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:03.855786 | orchestrator | 2026-04-17 00:48:03 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:48:03.856300 | orchestrator | 2026-04-17 00:48:03 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:03.856493 | orchestrator | 2026-04-17 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:06.937905 | orchestrator | 2026-04-17 00:48:06 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:06.938072 | orchestrator | 2026-04-17 00:48:06 | INFO  | Task 
bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:06.940850 | orchestrator | 2026-04-17 00:48:06 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:06.941151 | orchestrator | 2026-04-17 00:48:06 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:06.941890 | orchestrator | 2026-04-17 00:48:06 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state STARTED 2026-04-17 00:48:06.943624 | orchestrator | 2026-04-17 00:48:06 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:06.943645 | orchestrator | 2026-04-17 00:48:06 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:10.003824 | orchestrator | 2026-04-17 00:48:10 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:10.003917 | orchestrator | 2026-04-17 00:48:10 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:10.003929 | orchestrator | 2026-04-17 00:48:10 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:10.006566 | orchestrator | 2026-04-17 00:48:10 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:10.007966 | orchestrator | 2026-04-17 00:48:10.008016 | orchestrator | 2026-04-17 00:48:10.008026 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:48:10.008035 | orchestrator | 2026-04-17 00:48:10.008050 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:48:10.008064 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.308) 0:00:00.308 ********** 2026-04-17 00:48:10.008078 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:48:10.008094 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:48:10.008108 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:48:10.008178 | orchestrator | 
2026-04-17 00:48:10.008193 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 00:48:10.008201 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.280) 0:00:00.588 ********** 2026-04-17 00:48:10.008209 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-17 00:48:10.008218 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-17 00:48:10.008225 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-17 00:48:10.008233 | orchestrator | 2026-04-17 00:48:10.008241 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-17 00:48:10.008249 | orchestrator | 2026-04-17 00:48:10.008257 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-17 00:48:10.008265 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.260) 0:00:00.849 ********** 2026-04-17 00:48:10.008273 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:48:10.008282 | orchestrator | 2026-04-17 00:48:10.008289 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-17 00:48:10.008297 | orchestrator | Friday 17 April 2026 00:47:47 +0000 (0:00:00.445) 0:00:01.294 ********** 2026-04-17 00:48:10.008305 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-17 00:48:10.008313 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-17 00:48:10.008321 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-17 00:48:10.008329 | orchestrator | 2026-04-17 00:48:10.008337 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-17 00:48:10.008344 | orchestrator | Friday 17 April 2026 00:47:48 +0000 (0:00:01.413) 0:00:02.707 ********** 2026-04-17 
00:48:10.008352 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-17 00:48:10.008361 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-17 00:48:10.008369 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-17 00:48:10.008376 | orchestrator | 2026-04-17 00:48:10.008384 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-17 00:48:10.008396 | orchestrator | Friday 17 April 2026 00:47:50 +0000 (0:00:01.373) 0:00:04.080 ********** 2026-04-17 00:48:10.008409 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:48:10.008423 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:48:10.008435 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:48:10.008447 | orchestrator | 2026-04-17 00:48:10.008461 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-17 00:48:10.008476 | orchestrator | Friday 17 April 2026 00:47:52 +0000 (0:00:02.093) 0:00:06.174 ********** 2026-04-17 00:48:10.008490 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:48:10.008503 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:48:10.008517 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:48:10.008526 | orchestrator | 2026-04-17 00:48:10.008534 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:48:10.008542 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:48:10.008551 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:48:10.008560 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:48:10.008570 | orchestrator | 2026-04-17 00:48:10.008579 | orchestrator | 2026-04-17 00:48:10.008588 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 00:48:10.008597 | orchestrator | Friday 17 April 2026 00:47:54 +0000 (0:00:02.758) 0:00:08.933 ********** 2026-04-17 00:48:10.008606 | orchestrator | =============================================================================== 2026-04-17 00:48:10.008614 | orchestrator | memcached : Restart memcached container --------------------------------- 2.76s 2026-04-17 00:48:10.008631 | orchestrator | memcached : Check memcached container ----------------------------------- 2.09s 2026-04-17 00:48:10.008640 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.41s 2026-04-17 00:48:10.008649 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.37s 2026-04-17 00:48:10.008658 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.45s 2026-04-17 00:48:10.008666 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-04-17 00:48:10.008675 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.26s 2026-04-17 00:48:10.008684 | orchestrator | 2026-04-17 00:48:10.008692 | orchestrator | 2026-04-17 00:48:10.008701 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:48:10.008711 | orchestrator | 2026-04-17 00:48:10.008720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:48:10.008728 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.387) 0:00:00.387 ********** 2026-04-17 00:48:10.008737 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:48:10.008746 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:48:10.008754 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:48:10.008763 | orchestrator | 2026-04-17 00:48:10.008783 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2026-04-17 00:48:10.008807 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.291) 0:00:00.679 ********** 2026-04-17 00:48:10.008816 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-17 00:48:10.008825 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-17 00:48:10.008834 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-17 00:48:10.008843 | orchestrator | 2026-04-17 00:48:10.008851 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-17 00:48:10.008860 | orchestrator | 2026-04-17 00:48:10.008869 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-17 00:48:10.008878 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.266) 0:00:00.945 ********** 2026-04-17 00:48:10.008886 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:48:10.008895 | orchestrator | 2026-04-17 00:48:10.008904 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-17 00:48:10.008913 | orchestrator | Friday 17 April 2026 00:47:47 +0000 (0:00:00.385) 0:00:01.330 ********** 2026-04-17 00:48:10.008925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.008939 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.008947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.008961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.008970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.008989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.008998 | orchestrator | 2026-04-17 00:48:10.009006 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-17 00:48:10.009014 | orchestrator | Friday 17 April 2026 00:47:49 +0000 (0:00:01.880) 0:00:03.211 ********** 2026-04-17 00:48:10.009022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009082 | orchestrator | 2026-04-17 00:48:10.009090 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-17 00:48:10.009098 | orchestrator | Friday 17 April 2026 00:47:51 +0000 (0:00:02.224) 0:00:05.436 ********** 2026-04-17 00:48:10.009106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009209 | orchestrator | 2026-04-17 00:48:10.009227 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-17 00:48:10.009236 | orchestrator | Friday 17 April 2026 00:47:53 +0000 (0:00:02.376) 
0:00:07.812 ********** 2026-04-17 00:48:10.009244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-17 00:48:10.009317 | orchestrator | 
2026-04-17 00:48:10.009329 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 00:48:10.009341 | orchestrator | Friday 17 April 2026 00:47:55 +0000 (0:00:01.471) 0:00:09.283 ********** 2026-04-17 00:48:10.009354 | orchestrator | 2026-04-17 00:48:10.009373 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 00:48:10.009393 | orchestrator | Friday 17 April 2026 00:47:55 +0000 (0:00:00.222) 0:00:09.506 ********** 2026-04-17 00:48:10.009406 | orchestrator | 2026-04-17 00:48:10.009420 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-17 00:48:10.009434 | orchestrator | Friday 17 April 2026 00:47:55 +0000 (0:00:00.062) 0:00:09.569 ********** 2026-04-17 00:48:10.009447 | orchestrator | 2026-04-17 00:48:10.009459 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-17 00:48:10.009473 | orchestrator | Friday 17 April 2026 00:47:55 +0000 (0:00:00.063) 0:00:09.632 ********** 2026-04-17 00:48:10.009486 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:48:10.009499 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:48:10.009513 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:48:10.009528 | orchestrator | 2026-04-17 00:48:10.009541 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-17 00:48:10.009555 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:06.764) 0:00:16.396 ********** 2026-04-17 00:48:10.009567 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:48:10.009580 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:48:10.009605 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:48:10.009618 | orchestrator | 2026-04-17 00:48:10.009628 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 
00:48:10.009637 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:48:10.009645 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:48:10.009653 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:48:10.009660 | orchestrator | 2026-04-17 00:48:10.009668 | orchestrator | 2026-04-17 00:48:10.009676 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:48:10.009683 | orchestrator | Friday 17 April 2026 00:48:07 +0000 (0:00:05.634) 0:00:22.031 ********** 2026-04-17 00:48:10.009691 | orchestrator | =============================================================================== 2026-04-17 00:48:10.009698 | orchestrator | redis : Restart redis container ----------------------------------------- 6.76s 2026-04-17 00:48:10.009706 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.63s 2026-04-17 00:48:10.009714 | orchestrator | redis : Copying over redis config files --------------------------------- 2.38s 2026-04-17 00:48:10.009721 | orchestrator | redis : Copying over default config.json files -------------------------- 2.22s 2026-04-17 00:48:10.009729 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.88s 2026-04-17 00:48:10.009737 | orchestrator | redis : Check redis containers ------------------------------------------ 1.47s 2026-04-17 00:48:10.009744 | orchestrator | redis : include_tasks --------------------------------------------------- 0.39s 2026-04-17 00:48:10.009752 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.35s 2026-04-17 00:48:10.009759 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-04-17 00:48:10.009767 | orchestrator | 
Group hosts based on enabled services ----------------------------------- 0.27s 2026-04-17 00:48:10.009775 | orchestrator | 2026-04-17 00:48:10 | INFO  | Task 27b6e37e-4e2f-4bef-ba2d-3fc5d7c622fb is in state SUCCESS 2026-04-17 00:48:10.009783 | orchestrator | 2026-04-17 00:48:10 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:10.009791 | orchestrator | 2026-04-17 00:48:10 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:13.079189 | orchestrator | 2026-04-17 00:48:13 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:13.079273 | orchestrator | 2026-04-17 00:48:13 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:13.079280 | orchestrator | 2026-04-17 00:48:13 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:13.079287 | orchestrator | 2026-04-17 00:48:13 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:13.079292 | orchestrator | 2026-04-17 00:48:13 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:13.079296 | orchestrator | 2026-04-17 00:48:13 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:16.171527 | orchestrator | 2026-04-17 00:48:16 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:16.171639 | orchestrator | 2026-04-17 00:48:16 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:16.179193 | orchestrator | 2026-04-17 00:48:16 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:16.181807 | orchestrator | 2026-04-17 00:48:16 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:16.185686 | orchestrator | 2026-04-17 00:48:16 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:16.185759 | orchestrator | 2026-04-17 00:48:16 
| INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:19.221079 | orchestrator | 2026-04-17 00:48:19 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:19.221224 | orchestrator | 2026-04-17 00:48:19 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:19.221237 | orchestrator | 2026-04-17 00:48:19 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:19.221244 | orchestrator | 2026-04-17 00:48:19 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:19.221251 | orchestrator | 2026-04-17 00:48:19 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:19.221258 | orchestrator | 2026-04-17 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:22.261391 | orchestrator | 2026-04-17 00:48:22 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:22.263351 | orchestrator | 2026-04-17 00:48:22 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:22.264998 | orchestrator | 2026-04-17 00:48:22 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:22.266340 | orchestrator | 2026-04-17 00:48:22 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:22.267509 | orchestrator | 2026-04-17 00:48:22 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:22.267633 | orchestrator | 2026-04-17 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:25.309509 | orchestrator | 2026-04-17 00:48:25 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:25.310892 | orchestrator | 2026-04-17 00:48:25 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:25.311513 | orchestrator | 2026-04-17 00:48:25 | INFO  | Task 
a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:25.312820 | orchestrator | 2026-04-17 00:48:25 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:25.314535 | orchestrator | 2026-04-17 00:48:25 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:25.314587 | orchestrator | 2026-04-17 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:28.357029 | orchestrator | 2026-04-17 00:48:28 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:28.360186 | orchestrator | 2026-04-17 00:48:28 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:28.363375 | orchestrator | 2026-04-17 00:48:28 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:28.364590 | orchestrator | 2026-04-17 00:48:28 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:28.365645 | orchestrator | 2026-04-17 00:48:28 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:28.366868 | orchestrator | 2026-04-17 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:31.423689 | orchestrator | 2026-04-17 00:48:31 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:31.427403 | orchestrator | 2026-04-17 00:48:31 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:31.428490 | orchestrator | 2026-04-17 00:48:31 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:31.429448 | orchestrator | 2026-04-17 00:48:31 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:31.433425 | orchestrator | 2026-04-17 00:48:31 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:31.433520 | orchestrator | 2026-04-17 00:48:31 | INFO  | Wait 1 
second(s) until the next check 2026-04-17 00:48:34.481648 | orchestrator | 2026-04-17 00:48:34 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:34.482807 | orchestrator | 2026-04-17 00:48:34 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:34.483472 | orchestrator | 2026-04-17 00:48:34 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:34.484492 | orchestrator | 2026-04-17 00:48:34 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:34.487450 | orchestrator | 2026-04-17 00:48:34 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:34.487493 | orchestrator | 2026-04-17 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:37.681617 | orchestrator | 2026-04-17 00:48:37 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:37.682196 | orchestrator | 2026-04-17 00:48:37 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:37.683094 | orchestrator | 2026-04-17 00:48:37 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:37.683809 | orchestrator | 2026-04-17 00:48:37 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:37.684508 | orchestrator | 2026-04-17 00:48:37 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:37.686735 | orchestrator | 2026-04-17 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:40.732577 | orchestrator | 2026-04-17 00:48:40 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:40.733836 | orchestrator | 2026-04-17 00:48:40 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:40.735038 | orchestrator | 2026-04-17 00:48:40 | INFO  | Task 
a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:40.736154 | orchestrator | 2026-04-17 00:48:40 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:40.737367 | orchestrator | 2026-04-17 00:48:40 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:40.737423 | orchestrator | 2026-04-17 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:43.773954 | orchestrator | 2026-04-17 00:48:43 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:43.775692 | orchestrator | 2026-04-17 00:48:43 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:43.775722 | orchestrator | 2026-04-17 00:48:43 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:43.776862 | orchestrator | 2026-04-17 00:48:43 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:43.777828 | orchestrator | 2026-04-17 00:48:43 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:43.777848 | orchestrator | 2026-04-17 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:46.821209 | orchestrator | 2026-04-17 00:48:46 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:46.822400 | orchestrator | 2026-04-17 00:48:46 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:46.825752 | orchestrator | 2026-04-17 00:48:46 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:46.826871 | orchestrator | 2026-04-17 00:48:46 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:46.829013 | orchestrator | 2026-04-17 00:48:46 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:46.829053 | orchestrator | 2026-04-17 00:48:46 | INFO  | Wait 1 
second(s) until the next check 2026-04-17 00:48:49.968477 | orchestrator | 2026-04-17 00:48:49 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:49.968523 | orchestrator | 2026-04-17 00:48:49 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:49.968530 | orchestrator | 2026-04-17 00:48:49 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:49.968535 | orchestrator | 2026-04-17 00:48:49 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:49.968540 | orchestrator | 2026-04-17 00:48:49 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:49.968546 | orchestrator | 2026-04-17 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:52.920939 | orchestrator | 2026-04-17 00:48:52 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:52.922496 | orchestrator | 2026-04-17 00:48:52 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:52.925078 | orchestrator | 2026-04-17 00:48:52 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:52.928105 | orchestrator | 2026-04-17 00:48:52 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:52.929384 | orchestrator | 2026-04-17 00:48:52 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:52.930410 | orchestrator | 2026-04-17 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:55.973966 | orchestrator | 2026-04-17 00:48:55 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:55.974484 | orchestrator | 2026-04-17 00:48:55 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:55.975283 | orchestrator | 2026-04-17 00:48:55 | INFO  | Task 
a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:55.976127 | orchestrator | 2026-04-17 00:48:55 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:55.976911 | orchestrator | 2026-04-17 00:48:55 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:55.977009 | orchestrator | 2026-04-17 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:48:59.012546 | orchestrator | 2026-04-17 00:48:59 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:48:59.014428 | orchestrator | 2026-04-17 00:48:59 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:48:59.015802 | orchestrator | 2026-04-17 00:48:59 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:48:59.017690 | orchestrator | 2026-04-17 00:48:59 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:48:59.018795 | orchestrator | 2026-04-17 00:48:59 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:48:59.019057 | orchestrator | 2026-04-17 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:49:02.064803 | orchestrator | 2026-04-17 00:49:02 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:02.065331 | orchestrator | 2026-04-17 00:49:02 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:02.066085 | orchestrator | 2026-04-17 00:49:02 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:49:02.067655 | orchestrator | 2026-04-17 00:49:02 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:49:02.067681 | orchestrator | 2026-04-17 00:49:02 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:49:02.067688 | orchestrator | 2026-04-17 00:49:02 | INFO  | Wait 1 
second(s) until the next check 2026-04-17 00:49:05.105268 | orchestrator | 2026-04-17 00:49:05 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:05.106167 | orchestrator | 2026-04-17 00:49:05 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:05.107072 | orchestrator | 2026-04-17 00:49:05 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:49:05.108010 | orchestrator | 2026-04-17 00:49:05 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:49:05.109398 | orchestrator | 2026-04-17 00:49:05 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:49:05.109786 | orchestrator | 2026-04-17 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:49:08.155525 | orchestrator | 2026-04-17 00:49:08 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:08.158150 | orchestrator | 2026-04-17 00:49:08 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:08.160727 | orchestrator | 2026-04-17 00:49:08 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:49:08.163563 | orchestrator | 2026-04-17 00:49:08 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED 2026-04-17 00:49:08.167620 | orchestrator | 2026-04-17 00:49:08 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:49:08.167675 | orchestrator | 2026-04-17 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:49:11.223494 | orchestrator | 2026-04-17 00:49:11 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:11.227503 | orchestrator | 2026-04-17 00:49:11 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:11.233016 | orchestrator | 2026-04-17 00:49:11 | INFO  | Task 
a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED
2026-04-17 00:49:11.235418 | orchestrator | 2026-04-17 00:49:11 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:49:11.237529 | orchestrator | 2026-04-17 00:49:11 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED
2026-04-17 00:49:11.237577 | orchestrator | 2026-04-17 00:49:11 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:49:14.282942 | orchestrator | 2026-04-17 00:49:14 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED
2026-04-17 00:49:14.283491 | orchestrator | 2026-04-17 00:49:14 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:49:14.283937 | orchestrator | 2026-04-17 00:49:14 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED
2026-04-17 00:49:14.286183 | orchestrator | 2026-04-17 00:49:14 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:49:14.286865 | orchestrator | 2026-04-17 00:49:14 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED
2026-04-17 00:49:14.286890 | orchestrator | 2026-04-17 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:49:48.678961 | orchestrator | 2026-04-17 00:49:48 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED
2026-04-17 00:49:48.680747 | orchestrator | 2026-04-17 00:49:48 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED
2026-04-17 00:49:48.682236 | orchestrator | 2026-04-17 00:49:48 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED
2026-04-17 00:49:48.683405 | orchestrator | 2026-04-17 00:49:48 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state STARTED
2026-04-17 00:49:48.684331 | orchestrator | 2026-04-17 00:49:48 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED
2026-04-17 00:49:48.685318 | orchestrator | 2026-04-17 00:49:48 | INFO  | Wait 1
second(s) until the next check 2026-04-17 00:49:51.712644 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:51.712758 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:51.714166 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:49:51.716407 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task a45b85e8-d0ed-477b-808e-e6dafc71ed02 is in state STARTED 2026-04-17 00:49:51.718194 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task 6c2a7d0c-03cb-42ed-99d6-446310c80a8d is in state STARTED 2026-04-17 00:49:51.719203 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task 448bf28f-15f4-462d-a805-d6d91b6eda34 is in state SUCCESS 2026-04-17 00:49:51.723456 | orchestrator | 2026-04-17 00:49:51.723500 | orchestrator | 2026-04-17 00:49:51.723508 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-17 00:49:51.723516 | orchestrator | 2026-04-17 00:49:51.723522 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-17 00:49:51.723527 | orchestrator | Friday 17 April 2026 00:45:26 +0000 (0:00:00.268) 0:00:00.268 ********** 2026-04-17 00:49:51.723532 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:49:51.723537 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:49:51.723541 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:49:51.723545 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.723549 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.723553 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.723557 | orchestrator | 2026-04-17 00:49:51.723561 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-17 00:49:51.723565 | orchestrator | Friday 17 April 2026 00:45:27 +0000 
(0:00:00.670) 0:00:00.938 ********** 2026-04-17 00:49:51.723569 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.723574 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.723578 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.723584 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.723590 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.723620 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.723626 | orchestrator | 2026-04-17 00:49:51.723633 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-17 00:49:51.723639 | orchestrator | Friday 17 April 2026 00:45:28 +0000 (0:00:00.799) 0:00:01.738 ********** 2026-04-17 00:49:51.723645 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.723651 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.723656 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.723662 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.723668 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.723674 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.723680 | orchestrator | 2026-04-17 00:49:51.723686 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-17 00:49:51.723692 | orchestrator | Friday 17 April 2026 00:45:29 +0000 (0:00:00.596) 0:00:02.335 ********** 2026-04-17 00:49:51.723698 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:49:51.723704 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:49:51.723709 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.723716 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.723721 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:49:51.723727 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:49:51.723733 | orchestrator | 2026-04-17 00:49:51.723739 | orchestrator | TASK [k3s_prereq : Enable IPv6 
forwarding] ************************************* 2026-04-17 00:49:51.723745 | orchestrator | Friday 17 April 2026 00:45:31 +0000 (0:00:02.036) 0:00:04.372 ********** 2026-04-17 00:49:51.723751 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:49:51.723756 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:49:51.723761 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:49:51.723767 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.723773 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:49:51.723778 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.723798 | orchestrator | 2026-04-17 00:49:51.723804 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-17 00:49:51.723810 | orchestrator | Friday 17 April 2026 00:45:31 +0000 (0:00:00.863) 0:00:05.235 ********** 2026-04-17 00:49:51.723815 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:49:51.723820 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:49:51.723825 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:49:51.723831 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.723836 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.723842 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:49:51.723848 | orchestrator | 2026-04-17 00:49:51.723854 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-17 00:49:51.723860 | orchestrator | Friday 17 April 2026 00:45:32 +0000 (0:00:00.980) 0:00:06.216 ********** 2026-04-17 00:49:51.723865 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.723870 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.723875 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.723881 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.723886 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.723892 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 00:49:51.723897 | orchestrator | 2026-04-17 00:49:51.723903 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-17 00:49:51.723909 | orchestrator | Friday 17 April 2026 00:45:33 +0000 (0:00:00.875) 0:00:07.091 ********** 2026-04-17 00:49:51.723927 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.723933 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.723939 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.723944 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.723950 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.723956 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.723962 | orchestrator | 2026-04-17 00:49:51.723968 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-17 00:49:51.723983 | orchestrator | Friday 17 April 2026 00:45:34 +0000 (0:00:00.934) 0:00:08.026 ********** 2026-04-17 00:49:51.723989 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 00:49:51.723995 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 00:49:51.724001 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724007 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 00:49:51.724014 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 00:49:51.724020 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724025 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 00:49:51.724031 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 00:49:51.724037 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724043 | orchestrator | skipping: 
[testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 00:49:51.724062 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 00:49:51.724069 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724075 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 00:49:51.724114 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 00:49:51.724120 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 00:49:51.724127 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724133 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 00:49:51.724137 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724142 | orchestrator | 2026-04-17 00:49:51.724146 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-17 00:49:51.724150 | orchestrator | Friday 17 April 2026 00:45:35 +0000 (0:00:00.917) 0:00:08.944 ********** 2026-04-17 00:49:51.724155 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724159 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724163 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724167 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724172 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724176 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724180 | orchestrator | 2026-04-17 00:49:51.724184 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-17 00:49:51.724189 | orchestrator | Friday 17 April 2026 00:45:36 +0000 (0:00:01.202) 0:00:10.146 ********** 2026-04-17 00:49:51.724194 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:49:51.724199 | orchestrator 
| ok: [testbed-node-4] 2026-04-17 00:49:51.724203 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:49:51.724207 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.724211 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.724215 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.724219 | orchestrator | 2026-04-17 00:49:51.724223 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-17 00:49:51.724228 | orchestrator | Friday 17 April 2026 00:45:37 +0000 (0:00:00.642) 0:00:10.789 ********** 2026-04-17 00:49:51.724232 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:49:51.724236 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.724241 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:49:51.724245 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:49:51.724249 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.724253 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:49:51.724257 | orchestrator | 2026-04-17 00:49:51.724261 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-17 00:49:51.724265 | orchestrator | Friday 17 April 2026 00:45:43 +0000 (0:00:06.275) 0:00:17.064 ********** 2026-04-17 00:49:51.724275 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724279 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724283 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724287 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724291 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724295 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724299 | orchestrator | 2026-04-17 00:49:51.724303 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-17 00:49:51.724307 | orchestrator | Friday 17 April 2026 00:45:44 +0000 (0:00:01.091) 0:00:18.156 ********** 2026-04-17 
00:49:51.724312 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724316 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724320 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724324 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724328 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724332 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724336 | orchestrator | 2026-04-17 00:49:51.724341 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-17 00:49:51.724346 | orchestrator | Friday 17 April 2026 00:45:46 +0000 (0:00:01.636) 0:00:19.793 ********** 2026-04-17 00:49:51.724350 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724355 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724359 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724363 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724367 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724371 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724375 | orchestrator | 2026-04-17 00:49:51.724379 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-17 00:49:51.724384 | orchestrator | Friday 17 April 2026 00:45:48 +0000 (0:00:01.824) 0:00:21.618 ********** 2026-04-17 00:49:51.724388 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-17 00:49:51.724393 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-17 00:49:51.724397 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724403 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-17 00:49:51.724410 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-17 00:49:51.724416 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  
2026-04-17 00:49:51.724421 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-17 00:49:51.724427 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724432 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-17 00:49:51.724438 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-17 00:49:51.724444 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724450 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-17 00:49:51.724456 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-17 00:49:51.724462 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724467 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724473 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-17 00:49:51.724479 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-17 00:49:51.724484 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724490 | orchestrator | 2026-04-17 00:49:51.724496 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-17 00:49:51.724509 | orchestrator | Friday 17 April 2026 00:45:49 +0000 (0:00:00.893) 0:00:22.511 ********** 2026-04-17 00:49:51.724516 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724522 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724528 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724534 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724540 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724546 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724558 | orchestrator | 2026-04-17 00:49:51.724562 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-17 00:49:51.724566 | orchestrator | Friday 17 April 2026 00:45:50 +0000 
(0:00:01.201) 0:00:23.712 ********** 2026-04-17 00:49:51.724570 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.724574 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.724578 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.724581 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724585 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724589 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724593 | orchestrator | 2026-04-17 00:49:51.724597 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-17 00:49:51.724600 | orchestrator | 2026-04-17 00:49:51.724604 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-17 00:49:51.724608 | orchestrator | Friday 17 April 2026 00:45:51 +0000 (0:00:01.400) 0:00:25.113 ********** 2026-04-17 00:49:51.724612 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.724616 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.724619 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.724623 | orchestrator | 2026-04-17 00:49:51.724627 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-17 00:49:51.724630 | orchestrator | Friday 17 April 2026 00:45:53 +0000 (0:00:01.391) 0:00:26.505 ********** 2026-04-17 00:49:51.724634 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.724638 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.724641 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.724645 | orchestrator | 2026-04-17 00:49:51.724649 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-17 00:49:51.724653 | orchestrator | Friday 17 April 2026 00:45:54 +0000 (0:00:01.189) 0:00:27.694 ********** 2026-04-17 00:49:51.724656 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.724660 | orchestrator | 
ok: [testbed-node-2] 2026-04-17 00:49:51.724664 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.724685 | orchestrator | 2026-04-17 00:49:51.724697 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-17 00:49:51.724702 | orchestrator | Friday 17 April 2026 00:45:55 +0000 (0:00:01.155) 0:00:28.849 ********** 2026-04-17 00:49:51.724708 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.724714 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.724720 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.724726 | orchestrator | 2026-04-17 00:49:51.724732 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-17 00:49:51.724738 | orchestrator | Friday 17 April 2026 00:45:56 +0000 (0:00:01.342) 0:00:30.192 ********** 2026-04-17 00:49:51.724744 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.724751 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724758 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724764 | orchestrator | 2026-04-17 00:49:51.724776 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-17 00:49:51.724783 | orchestrator | Friday 17 April 2026 00:45:57 +0000 (0:00:00.254) 0:00:30.446 ********** 2026-04-17 00:49:51.724789 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.724795 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.724801 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:49:51.724807 | orchestrator | 2026-04-17 00:49:51.724813 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-17 00:49:51.724819 | orchestrator | Friday 17 April 2026 00:45:58 +0000 (0:00:00.939) 0:00:31.386 ********** 2026-04-17 00:49:51.724826 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.724832 | orchestrator | changed: [testbed-node-1] 2026-04-17 
00:49:51.724839 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.724846 | orchestrator | 2026-04-17 00:49:51.724852 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-17 00:49:51.724859 | orchestrator | Friday 17 April 2026 00:45:59 +0000 (0:00:01.886) 0:00:33.272 ********** 2026-04-17 00:49:51.724873 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:49:51.724879 | orchestrator | 2026-04-17 00:49:51.724888 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-17 00:49:51.724894 | orchestrator | Friday 17 April 2026 00:46:01 +0000 (0:00:01.166) 0:00:34.439 ********** 2026-04-17 00:49:51.724900 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.724907 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.724913 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.724919 | orchestrator | 2026-04-17 00:49:51.724925 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-17 00:49:51.724932 | orchestrator | Friday 17 April 2026 00:46:04 +0000 (0:00:02.977) 0:00:37.417 ********** 2026-04-17 00:49:51.724938 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724944 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724951 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.724957 | orchestrator | 2026-04-17 00:49:51.724963 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-17 00:49:51.724969 | orchestrator | Friday 17 April 2026 00:46:04 +0000 (0:00:00.702) 0:00:38.120 ********** 2026-04-17 00:49:51.724976 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.724982 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.724988 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.724994 | 
orchestrator | 2026-04-17 00:49:51.725001 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-17 00:49:51.725007 | orchestrator | Friday 17 April 2026 00:46:06 +0000 (0:00:01.215) 0:00:39.336 ********** 2026-04-17 00:49:51.725013 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.725019 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.725025 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.725032 | orchestrator | 2026-04-17 00:49:51.725038 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-17 00:49:51.725053 | orchestrator | Friday 17 April 2026 00:46:07 +0000 (0:00:01.794) 0:00:41.130 ********** 2026-04-17 00:49:51.725059 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.725066 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.725072 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.725100 | orchestrator | 2026-04-17 00:49:51.725107 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-17 00:49:51.725114 | orchestrator | Friday 17 April 2026 00:46:08 +0000 (0:00:00.761) 0:00:41.892 ********** 2026-04-17 00:49:51.725120 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.725127 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.725133 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.725140 | orchestrator | 2026-04-17 00:49:51.725147 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-17 00:49:51.725153 | orchestrator | Friday 17 April 2026 00:46:09 +0000 (0:00:00.557) 0:00:42.449 ********** 2026-04-17 00:49:51.725160 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:49:51.725166 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:49:51.725172 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:49:51.725178 | 
orchestrator | 2026-04-17 00:49:51.725183 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-17 00:49:51.725189 | orchestrator | Friday 17 April 2026 00:46:11 +0000 (0:00:02.295) 0:00:44.744 ********** 2026-04-17 00:49:51.725195 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.725203 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.725209 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.725217 | orchestrator | 2026-04-17 00:49:51.725223 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-17 00:49:51.725229 | orchestrator | Friday 17 April 2026 00:46:14 +0000 (0:00:02.580) 0:00:47.324 ********** 2026-04-17 00:49:51.725236 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:49:51.725248 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:49:51.725254 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:49:51.725261 | orchestrator | 2026-04-17 00:49:51.725267 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-17 00:49:51.725274 | orchestrator | Friday 17 April 2026 00:46:14 +0000 (0:00:00.715) 0:00:48.040 ********** 2026-04-17 00:49:51.725280 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 00:49:51.725288 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 00:49:51.725295 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-17 00:49:51.725301 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-04-17 00:49:51.725307 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 00:49:51.725313 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-17 00:49:51.725319 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 00:49:51.725326 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 00:49:51.725332 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-17 00:49:51.725338 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-17 00:49:51.725354 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-17 00:49:51.725361 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-04-17 00:49:51.725367 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725373 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725379 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725386 | orchestrator |
2026-04-17 00:49:51.725392 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-17 00:49:51.725398 | orchestrator | Friday 17 April 2026 00:46:58 +0000 (0:00:43.784) 0:01:31.824 **********
2026-04-17 00:49:51.725404 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.725411 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:49:51.725418 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:49:51.725424 | orchestrator |
2026-04-17 00:49:51.725430 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-17 00:49:51.725436 | orchestrator | Friday 17 April 2026 00:46:59 +0000 (0:00:00.551) 0:01:32.376 **********
2026-04-17 00:49:51.725444 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725451 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725458 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725465 | orchestrator |
2026-04-17 00:49:51.725471 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-17 00:49:51.725478 | orchestrator | Friday 17 April 2026 00:47:00 +0000 (0:00:01.461) 0:01:33.838 **********
2026-04-17 00:49:51.725484 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725490 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725496 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725503 | orchestrator |
2026-04-17 00:49:51.725515 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-17 00:49:51.725524 | orchestrator | Friday 17 April 2026 00:47:01 +0000 (0:00:01.322) 0:01:35.160 **********
2026-04-17 00:49:51.725527 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725531 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725535 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725539 | orchestrator |
2026-04-17 00:49:51.725542 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-17 00:49:51.725546 | orchestrator | Friday 17 April 2026 00:47:28 +0000 (0:00:26.711) 0:02:01.871 **********
2026-04-17 00:49:51.725550 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725554 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725557 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725561 | orchestrator |
2026-04-17 00:49:51.725565 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-17 00:49:51.725569 | orchestrator | Friday 17 April 2026 00:47:29 +0000 (0:00:00.673) 0:02:02.544 **********
2026-04-17 00:49:51.725572 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725576 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725580 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725584 | orchestrator |
2026-04-17 00:49:51.725588 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-17 00:49:51.725591 | orchestrator | Friday 17 April 2026 00:47:30 +0000 (0:00:01.123) 0:02:03.667 **********
2026-04-17 00:49:51.725595 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725599 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725603 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725606 | orchestrator |
2026-04-17 00:49:51.725610 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-17 00:49:51.725614 | orchestrator | Friday 17 April 2026 00:47:30 +0000 (0:00:00.586) 0:02:04.253 **********
2026-04-17 00:49:51.725618 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725621 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725625 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725629 | orchestrator |
2026-04-17 00:49:51.725633 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-17 00:49:51.725637 | orchestrator | Friday 17 April 2026 00:47:31 +0000 (0:00:00.578) 0:02:04.831 **********
2026-04-17 00:49:51.725640 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725644 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725648 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725651 | orchestrator |
2026-04-17 00:49:51.725655 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-17 00:49:51.725659 | orchestrator | Friday 17 April 2026 00:47:31 +0000 (0:00:00.264) 0:02:05.096 **********
2026-04-17 00:49:51.725663 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725667 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725670 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725674 | orchestrator |
2026-04-17 00:49:51.725678 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-17 00:49:51.725682 | orchestrator | Friday 17 April 2026 00:47:32 +0000 (0:00:00.756) 0:02:05.853 **********
2026-04-17 00:49:51.725685 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725689 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725693 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725696 | orchestrator |
2026-04-17 00:49:51.725702 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-17 00:49:51.725708 | orchestrator | Friday 17 April 2026 00:47:33 +0000 (0:00:00.638) 0:02:06.491 **********
2026-04-17 00:49:51.725714 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725720 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725728 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725736 | orchestrator |
2026-04-17 00:49:51.725742 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-17 00:49:51.725748 | orchestrator | Friday 17 April 2026 00:47:34 +0000 (0:00:00.877) 0:02:07.368 **********
2026-04-17 00:49:51.725762 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:49:51.725768 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:49:51.725773 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:49:51.725779 | orchestrator |
2026-04-17 00:49:51.725785 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-17 00:49:51.725791 | orchestrator | Friday 17 April 2026 00:47:34 +0000 (0:00:00.758) 0:02:08.127 **********
2026-04-17 00:49:51.725797 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.725803 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:49:51.725808 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:49:51.725814 | orchestrator |
2026-04-17 00:49:51.725824 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-17 00:49:51.725831 | orchestrator | Friday 17 April 2026 00:47:35 +0000 (0:00:00.401) 0:02:08.529 **********
2026-04-17 00:49:51.725836 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.725842 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:49:51.725849 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:49:51.725853 | orchestrator |
2026-04-17 00:49:51.725857 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-17 00:49:51.725861 | orchestrator | Friday 17 April 2026 00:47:35 +0000 (0:00:00.309) 0:02:08.839 **********
2026-04-17 00:49:51.725865 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725868 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725872 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725876 | orchestrator |
2026-04-17 00:49:51.725880 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-17 00:49:51.725883 | orchestrator | Friday 17 April 2026 00:47:36 +0000 (0:00:00.600) 0:02:09.439 **********
2026-04-17 00:49:51.725887 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.725891 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.725895 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.725898 | orchestrator |
2026-04-17 00:49:51.725902 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-17 00:49:51.725906 | orchestrator | Friday 17 April 2026 00:47:36 +0000 (0:00:00.659) 0:02:10.099 **********
2026-04-17 00:49:51.725910 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-17 00:49:51.725919 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-17 00:49:51.725923 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-17 00:49:51.725926 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-17 00:49:51.725930 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-17 00:49:51.725934 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-17 00:49:51.725938 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-17 00:49:51.725942 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-17 00:49:51.725946 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-17 00:49:51.725950 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-17 00:49:51.725954 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-17 00:49:51.725957 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-17 00:49:51.725961 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-17 00:49:51.725965 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-17 00:49:51.725973 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-17 00:49:51.725977 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-17 00:49:51.725981 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-17 00:49:51.725985 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-17 00:49:51.725989 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-17 00:49:51.725992 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-17 00:49:51.725996 | orchestrator |
2026-04-17 00:49:51.726001 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-17 00:49:51.726004 | orchestrator |
2026-04-17 00:49:51.726008 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-17 00:49:51.726047 | orchestrator | Friday 17 April 2026 00:47:40 +0000 (0:00:03.377) 0:02:13.477 **********
2026-04-17 00:49:51.726053 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:49:51.726056 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:49:51.726060 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:49:51.726064 | orchestrator |
2026-04-17 00:49:51.726068 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-17 00:49:51.726072 | orchestrator | Friday 17 April 2026 00:47:40 +0000 (0:00:00.381) 0:02:13.859 **********
2026-04-17 00:49:51.726076 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:49:51.726096 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:49:51.726103 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:49:51.726109 | orchestrator |
2026-04-17 00:49:51.726115 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-17 00:49:51.726121 | orchestrator | Friday 17 April 2026 00:47:41 +0000 (0:00:00.670) 0:02:14.529 **********
2026-04-17 00:49:51.726127 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:49:51.726134 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:49:51.726140 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:49:51.726145 | orchestrator |
2026-04-17 00:49:51.726152 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-17 00:49:51.726156 | orchestrator | Friday 17 April 2026 00:47:41 +0000 (0:00:00.459) 0:02:14.989 **********
2026-04-17 00:49:51.726163 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:49:51.726167 | orchestrator |
2026-04-17 00:49:51.726171 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-17 00:49:51.726174 | orchestrator | Friday 17 April 2026 00:47:42 +0000 (0:00:00.248) 0:02:15.411 **********
2026-04-17 00:49:51.726178 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:49:51.726184 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:49:51.726190 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:49:51.726198 | orchestrator |
2026-04-17 00:49:51.726207 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-17 00:49:51.726212 | orchestrator | Friday 17 April 2026 00:47:42 +0000 (0:00:00.367) 0:02:15.660 **********
2026-04-17 00:49:51.726218 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:49:51.726223 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:49:51.726229 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:49:51.726234 | orchestrator |
2026-04-17 00:49:51.726240 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-17 00:49:51.726246 | orchestrator | Friday 17 April 2026 00:47:42 +0000 (0:00:00.270) 0:02:16.027 **********
2026-04-17 00:49:51.726251 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:49:51.726257 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:49:51.726263 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:49:51.726268 | orchestrator |
2026-04-17 00:49:51.726274 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-17 00:49:51.726286 | orchestrator | Friday 17 April 2026 00:47:42 +0000 (0:00:00.270) 0:02:16.297 **********
2026-04-17 00:49:51.726292 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:49:51.726298 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:49:51.726304 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:49:51.726310 | orchestrator |
2026-04-17 00:49:51.726322 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-17 00:49:51.726329 | orchestrator | Friday 17 April 2026 00:47:43 +0000 (0:00:01.042) 0:02:16.906 **********
2026-04-17 00:49:51.726336 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:49:51.726342 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:49:51.726348 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:49:51.726354 | orchestrator |
2026-04-17 00:49:51.726359 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-17 00:49:51.726363 | orchestrator | Friday 17 April 2026 00:47:44 +0000 (0:00:01.042) 0:02:17.948 **********
2026-04-17 00:49:51.726367 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:49:51.726371 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:49:51.726375 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:49:51.726378 | orchestrator |
2026-04-17 00:49:51.726382 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-17 00:49:51.726386 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:01.464) 0:02:19.412 **********
2026-04-17 00:49:51.726390 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:49:51.726393 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:49:51.726397 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:49:51.726401 | orchestrator |
2026-04-17 00:49:51.726405 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-17 00:49:51.726408 | orchestrator |
2026-04-17 00:49:51.726412 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-17 00:49:51.726416 | orchestrator | Friday 17 April 2026 00:47:55 +0000 (0:00:09.503) 0:02:28.916 **********
2026-04-17 00:49:51.726419 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.726423 | orchestrator |
2026-04-17 00:49:51.726427 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-17 00:49:51.726431 | orchestrator | Friday 17 April 2026 00:47:56 +0000 (0:00:00.795) 0:02:29.711 **********
2026-04-17 00:49:51.726434 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726438 | orchestrator |
2026-04-17 00:49:51.726442 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-17 00:49:51.726446 | orchestrator | Friday 17 April 2026 00:47:56 +0000 (0:00:00.365) 0:02:30.077 **********
2026-04-17 00:49:51.726450 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-17 00:49:51.726453 | orchestrator |
2026-04-17 00:49:51.726457 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-17 00:49:51.726461 | orchestrator | Friday 17 April 2026 00:47:57 +0000 (0:00:00.487) 0:02:30.565 **********
2026-04-17 00:49:51.726465 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726468 | orchestrator |
2026-04-17 00:49:51.726472 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-17 00:49:51.726476 | orchestrator | Friday 17 April 2026 00:47:58 +0000 (0:00:00.864) 0:02:31.430 **********
2026-04-17 00:49:51.726479 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726483 | orchestrator |
2026-04-17 00:49:51.726487 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-17 00:49:51.726491 | orchestrator | Friday 17 April 2026 00:47:58 +0000 (0:00:00.517) 0:02:31.947 **********
2026-04-17 00:49:51.726494 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-17 00:49:51.726501 | orchestrator |
2026-04-17 00:49:51.726507 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-17 00:49:51.726513 | orchestrator | Friday 17 April 2026 00:48:00 +0000 (0:00:01.436) 0:02:33.384 **********
2026-04-17 00:49:51.726519 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-17 00:49:51.726525 | orchestrator |
2026-04-17 00:49:51.726535 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-17 00:49:51.726542 | orchestrator | Friday 17 April 2026 00:48:00 +0000 (0:00:00.913) 0:02:34.297 **********
2026-04-17 00:49:51.726548 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726554 | orchestrator |
2026-04-17 00:49:51.726561 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-17 00:49:51.726567 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.362) 0:02:34.660 **********
2026-04-17 00:49:51.726573 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726579 | orchestrator |
2026-04-17 00:49:51.726585 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-17 00:49:51.726591 | orchestrator |
2026-04-17 00:49:51.726597 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-17 00:49:51.726604 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.405) 0:02:35.066 **********
2026-04-17 00:49:51.726608 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.726612 | orchestrator |
2026-04-17 00:49:51.726616 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-17 00:49:51.726619 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.123) 0:02:35.189 **********
2026-04-17 00:49:51.726623 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 00:49:51.726627 | orchestrator |
2026-04-17 00:49:51.726631 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-17 00:49:51.726635 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:00.207) 0:02:35.397 **********
2026-04-17 00:49:51.726638 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.726642 | orchestrator |
2026-04-17 00:49:51.726646 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-17 00:49:51.726649 | orchestrator | Friday 17 April 2026 00:48:03 +0000 (0:00:01.068) 0:02:36.466 **********
2026-04-17 00:49:51.726653 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.726658 | orchestrator |
2026-04-17 00:49:51.726664 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-17 00:49:51.726669 | orchestrator | Friday 17 April 2026 00:48:04 +0000 (0:00:01.647) 0:02:38.114 **********
2026-04-17 00:49:51.726679 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726686 | orchestrator |
2026-04-17 00:49:51.726692 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-17 00:49:51.726698 | orchestrator | Friday 17 April 2026 00:48:05 +0000 (0:00:01.059) 0:02:39.173 **********
2026-04-17 00:49:51.726704 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.726709 | orchestrator |
2026-04-17 00:49:51.726721 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-17 00:49:51.726728 | orchestrator | Friday 17 April 2026 00:48:06 +0000 (0:00:00.561) 0:02:39.734 **********
2026-04-17 00:49:51.726734 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726740 | orchestrator |
2026-04-17 00:49:51.726746 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-17 00:49:51.726752 | orchestrator | Friday 17 April 2026 00:48:13 +0000 (0:00:06.597) 0:02:46.332 **********
2026-04-17 00:49:51.726759 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.726763 | orchestrator |
2026-04-17 00:49:51.726766 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-17 00:49:51.726770 | orchestrator | Friday 17 April 2026 00:48:24 +0000 (0:00:11.896) 0:02:58.229 **********
2026-04-17 00:49:51.726774 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.726777 | orchestrator |
2026-04-17 00:49:51.726781 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-17 00:49:51.726785 | orchestrator |
2026-04-17 00:49:51.726789 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-17 00:49:51.726793 | orchestrator | Friday 17 April 2026 00:48:25 +0000 (0:00:00.974) 0:02:59.204 **********
2026-04-17 00:49:51.726796 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.726805 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.726809 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.726813 | orchestrator |
2026-04-17 00:49:51.726817 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-17 00:49:51.726820 | orchestrator | Friday 17 April 2026 00:48:26 +0000 (0:00:00.575) 0:02:59.779 **********
2026-04-17 00:49:51.726824 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.726828 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:49:51.726831 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:49:51.726835 | orchestrator |
2026-04-17 00:49:51.726839 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-17 00:49:51.726843 | orchestrator | Friday 17 April 2026 00:48:26 +0000 (0:00:00.398) 0:03:00.178 **********
2026-04-17 00:49:51.726846 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-1, testbed-node-2, testbed-node-0
2026-04-17 00:49:51.726850 | orchestrator |
2026-04-17 00:49:51.726854 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-17 00:49:51.726858 | orchestrator | Friday 17 April 2026 00:48:27 +0000 (0:00:00.997) 0:03:01.176 **********
2026-04-17 00:49:51.726861 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.726866 | orchestrator |
2026-04-17 00:49:51.726870 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-17 00:49:51.726873 | orchestrator | Friday 17 April 2026 00:48:29 +0000 (0:00:01.319) 0:03:02.496 **********
2026-04-17 00:49:51.726877 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.726881 | orchestrator |
2026-04-17 00:49:51.726885 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-17 00:49:51.726888 | orchestrator | Friday 17 April 2026 00:48:30 +0000 (0:00:01.068) 0:03:03.565 **********
2026-04-17 00:49:51.726892 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.726896 | orchestrator |
2026-04-17 00:49:51.726900 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-17 00:49:51.726903 | orchestrator | Friday 17 April 2026 00:48:30 +0000 (0:00:00.469) 0:03:04.034 **********
2026-04-17 00:49:51.726907 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.726911 | orchestrator |
2026-04-17 00:49:51.726914 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-17 00:49:51.726918 | orchestrator | Friday 17 April 2026 00:48:31 +0000 (0:00:01.213) 0:03:05.248 **********
2026-04-17 00:49:51.726922 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.726926 | orchestrator |
2026-04-17 00:49:51.726929 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-17 00:49:51.726933 | orchestrator | Friday 17 April 2026 00:48:32 +0000 (0:00:00.139) 0:03:05.387 **********
2026-04-17 00:49:51.726937 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.726940 | orchestrator |
2026-04-17 00:49:51.726944 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-17 00:49:51.726948 | orchestrator | Friday 17 April 2026 00:48:32 +0000 (0:00:00.128) 0:03:05.515 **********
2026-04-17 00:49:51.726952 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.726955 | orchestrator |
2026-04-17 00:49:51.726962 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-17 00:49:51.726966 | orchestrator | Friday 17 April 2026 00:48:32 +0000 (0:00:00.132) 0:03:05.649 **********
2026-04-17 00:49:51.726970 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.726973 | orchestrator |
2026-04-17 00:49:51.726977 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-17 00:49:51.726981 | orchestrator | Friday 17 April 2026 00:48:32 +0000 (0:00:00.129) 0:03:05.778 **********
2026-04-17 00:49:51.726985 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.726989 | orchestrator |
2026-04-17 00:49:51.726992 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-17 00:49:51.726996 | orchestrator | Friday 17 April 2026 00:48:38 +0000 (0:00:05.958) 0:03:11.737 **********
2026-04-17 00:49:51.727003 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-17 00:49:51.727007 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-17 00:49:51.727011 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-17 00:49:51.727014 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-17 00:49:51.727018 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-17 00:49:51.727022 | orchestrator |
2026-04-17 00:49:51.727025 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-17 00:49:51.727029 | orchestrator | Friday 17 April 2026 00:49:20 +0000 (0:00:42.540) 0:03:54.277 **********
2026-04-17 00:49:51.727036 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.727040 | orchestrator |
2026-04-17 00:49:51.727044 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-17 00:49:51.727048 | orchestrator | Friday 17 April 2026 00:49:22 +0000 (0:00:01.262) 0:03:55.540 **********
2026-04-17 00:49:51.727051 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.727055 | orchestrator |
2026-04-17 00:49:51.727059 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-17 00:49:51.727063 | orchestrator | Friday 17 April 2026 00:49:24 +0000 (0:00:01.802) 0:03:57.343 **********
2026-04-17 00:49:51.727067 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-17 00:49:51.727070 | orchestrator |
2026-04-17 00:49:51.727074 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-17 00:49:51.727115 | orchestrator | Friday 17 April 2026 00:49:25 +0000 (0:00:01.089) 0:03:58.432 **********
2026-04-17 00:49:51.727119 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.727123 | orchestrator |
2026-04-17 00:49:51.727127 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-17 00:49:51.727130 | orchestrator | Friday 17 April 2026 00:49:25 +0000 (0:00:00.141) 0:03:58.573 **********
2026-04-17 00:49:51.727134 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-17 00:49:51.727138 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-17 00:49:51.727141 | orchestrator |
2026-04-17 00:49:51.727145 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-17 00:49:51.727149 | orchestrator | Friday 17 April 2026 00:49:27 +0000 (0:00:01.963) 0:04:00.537 **********
2026-04-17 00:49:51.727153 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:49:51.727156 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:49:51.727160 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:49:51.727164 | orchestrator |
2026-04-17 00:49:51.727168 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-17 00:49:51.727172 | orchestrator | Friday 17 April 2026 00:49:27 +0000 (0:00:00.262) 0:04:00.799 **********
2026-04-17 00:49:51.727175 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.727179 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.727183 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.727187 | orchestrator |
2026-04-17 00:49:51.727190 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-17 00:49:51.727194 | orchestrator |
2026-04-17 00:49:51.727198 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-17 00:49:51.727202 | orchestrator | Friday 17 April 2026 00:49:28 +0000 (0:00:00.859) 0:04:01.658 **********
2026-04-17 00:49:51.727205 | orchestrator | ok: [testbed-manager]
2026-04-17 00:49:51.727209 | orchestrator |
2026-04-17 00:49:51.727213 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-17 00:49:51.727217 | orchestrator | Friday 17 April 2026 00:49:28 +0000 (0:00:00.124) 0:04:01.783 **********
2026-04-17 00:49:51.727220 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-17 00:49:51.727228 | orchestrator |
2026-04-17 00:49:51.727232 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-17 00:49:51.727236 | orchestrator | Friday 17 April 2026 00:49:28 +0000 (0:00:00.341) 0:04:02.125 **********
2026-04-17 00:49:51.727239 | orchestrator | changed: [testbed-manager]
2026-04-17 00:49:51.727243 | orchestrator |
2026-04-17 00:49:51.727247 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-17 00:49:51.727250 | orchestrator |
2026-04-17 00:49:51.727254 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-17 00:49:51.727258 | orchestrator | Friday 17 April 2026 00:49:34 +0000 (0:00:05.693) 0:04:07.818 **********
2026-04-17 00:49:51.727261 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:49:51.727265 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:49:51.727269 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:49:51.727273 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:49:51.727276 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:49:51.727280 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:49:51.727284 | orchestrator |
2026-04-17 00:49:51.727287 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-17 00:49:51.727291 | orchestrator | Friday 17 April 2026 00:49:35 +0000 (0:00:00.643) 0:04:08.462 **********
2026-04-17 00:49:51.727298 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-17 00:49:51.727302 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-17 00:49:51.727306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-17 00:49:51.727309 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-17 00:49:51.727313 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-17 00:49:51.727317 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-17 00:49:51.727321 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-17 00:49:51.727325 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-17 00:49:51.727328 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-17 00:49:51.727332 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-17 00:49:51.727336 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-17 00:49:51.727339 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-17 00:49:51.727347 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-17 00:49:51.727351 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-17 00:49:51.727354 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-17 00:49:51.727358 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-17 00:49:51.727362 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-17 00:49:51.727366 | orchestrator | ok: [testbed-node-2 -> localhost]
=> (item=node-role.osism.tech/network-plane=true) 2026-04-17 00:49:51.727369 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 00:49:51.727373 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 00:49:51.727377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 00:49:51.727381 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-17 00:49:51.727384 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 00:49:51.727391 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 00:49:51.727395 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-17 00:49:51.727399 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 00:49:51.727403 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 00:49:51.727407 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-17 00:49:51.727410 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 00:49:51.727414 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-17 00:49:51.727418 | orchestrator | 2026-04-17 00:49:51.727422 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-17 00:49:51.727425 | orchestrator | Friday 17 April 2026 00:49:47 +0000 (0:00:12.321) 0:04:20.783 ********** 2026-04-17 00:49:51.727429 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.727433 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.727437 | orchestrator | 
skipping: [testbed-node-5] 2026-04-17 00:49:51.727440 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.727444 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.727448 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.727452 | orchestrator | 2026-04-17 00:49:51.727455 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-17 00:49:51.727459 | orchestrator | Friday 17 April 2026 00:49:47 +0000 (0:00:00.439) 0:04:21.223 ********** 2026-04-17 00:49:51.727463 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:49:51.727467 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:49:51.727471 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:49:51.727474 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:49:51.727478 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:49:51.727482 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:49:51.727486 | orchestrator | 2026-04-17 00:49:51.727489 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:49:51.727493 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:49:51.727498 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-17 00:49:51.727503 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 00:49:51.727511 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 00:49:51.727515 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 00:49:51.727519 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 00:49:51.727523 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 00:49:51.727526 | orchestrator | 2026-04-17 00:49:51.727530 | orchestrator | 2026-04-17 00:49:51.727534 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:49:51.727538 | orchestrator | Friday 17 April 2026 00:49:48 +0000 (0:00:00.589) 0:04:21.812 ********** 2026-04-17 00:49:51.727541 | orchestrator | =============================================================================== 2026-04-17 00:49:51.727545 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.78s 2026-04-17 00:49:51.727552 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.54s 2026-04-17 00:49:51.727556 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.71s 2026-04-17 00:49:51.727562 | orchestrator | Manage labels ---------------------------------------------------------- 12.32s 2026-04-17 00:49:51.727566 | orchestrator | kubectl : Install required packages ------------------------------------ 11.90s 2026-04-17 00:49:51.727570 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.50s 2026-04-17 00:49:51.727574 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.60s 2026-04-17 00:49:51.727578 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.28s 2026-04-17 00:49:51.727581 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.96s 2026-04-17 00:49:51.727585 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.69s 2026-04-17 00:49:51.727589 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.38s 2026-04-17 00:49:51.727593 | orchestrator 
| k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.98s 2026-04-17 00:49:51.727596 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.58s 2026-04-17 00:49:51.727600 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.30s 2026-04-17 00:49:51.727604 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s 2026-04-17 00:49:51.727608 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.96s 2026-04-17 00:49:51.727611 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.89s 2026-04-17 00:49:51.727615 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.82s 2026-04-17 00:49:51.727619 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.80s 2026-04-17 00:49:51.727623 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.79s 2026-04-17 00:49:51.727627 | orchestrator | 2026-04-17 00:49:51 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:49:51.727631 | orchestrator | 2026-04-17 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:49:54.757236 | orchestrator | 2026-04-17 00:49:54 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:54.758750 | orchestrator | 2026-04-17 00:49:54 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:54.759975 | orchestrator | 2026-04-17 00:49:54 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:49:54.760939 | orchestrator | 2026-04-17 00:49:54 | INFO  | Task a45b85e8-d0ed-477b-808e-e6dafc71ed02 is in state SUCCESS 2026-04-17 00:49:54.764715 | orchestrator | 2026-04-17 00:49:54 | INFO  | Task 
6c2a7d0c-03cb-42ed-99d6-446310c80a8d is in state STARTED 2026-04-17 00:49:54.766328 | orchestrator | 2026-04-17 00:49:54 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:49:54.766369 | orchestrator | 2026-04-17 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:49:57.801557 | orchestrator | 2026-04-17 00:49:57 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:49:57.802552 | orchestrator | 2026-04-17 00:49:57 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:49:57.802602 | orchestrator | 2026-04-17 00:49:57 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:49:57.803191 | orchestrator | 2026-04-17 00:49:57 | INFO  | Task 6c2a7d0c-03cb-42ed-99d6-446310c80a8d is in state STARTED 2026-04-17 00:49:57.804166 | orchestrator | 2026-04-17 00:49:57 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:49:57.804203 | orchestrator | 2026-04-17 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:00.840220 | orchestrator | 2026-04-17 00:50:00 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:00.840320 | orchestrator | 2026-04-17 00:50:00 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:00.842783 | orchestrator | 2026-04-17 00:50:00 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:50:00.842935 | orchestrator | 2026-04-17 00:50:00 | INFO  | Task 6c2a7d0c-03cb-42ed-99d6-446310c80a8d is in state SUCCESS 2026-04-17 00:50:00.845288 | orchestrator | 2026-04-17 00:50:00 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:00.845358 | orchestrator | 2026-04-17 00:50:00 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:03.889429 | orchestrator | 2026-04-17 00:50:03 | INFO  | Task 
cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:03.891140 | orchestrator | 2026-04-17 00:50:03 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:03.893128 | orchestrator | 2026-04-17 00:50:03 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:50:03.900116 | orchestrator | 2026-04-17 00:50:03 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:03.900163 | orchestrator | 2026-04-17 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:06.945230 | orchestrator | 2026-04-17 00:50:06 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:06.948310 | orchestrator | 2026-04-17 00:50:06 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:06.951209 | orchestrator | 2026-04-17 00:50:06 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:50:06.954774 | orchestrator | 2026-04-17 00:50:06 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:06.954825 | orchestrator | 2026-04-17 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:10.023506 | orchestrator | 2026-04-17 00:50:10 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:10.026735 | orchestrator | 2026-04-17 00:50:10 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:10.029182 | orchestrator | 2026-04-17 00:50:10 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:50:10.030745 | orchestrator | 2026-04-17 00:50:10 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:10.031008 | orchestrator | 2026-04-17 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:13.073167 | orchestrator | 2026-04-17 00:50:13 | INFO  | Task 
cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:13.074166 | orchestrator | 2026-04-17 00:50:13 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:13.075270 | orchestrator | 2026-04-17 00:50:13 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:50:13.076601 | orchestrator | 2026-04-17 00:50:13 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:13.076626 | orchestrator | 2026-04-17 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:16.116010 | orchestrator | 2026-04-17 00:50:16 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:16.118841 | orchestrator | 2026-04-17 00:50:16 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:16.120191 | orchestrator | 2026-04-17 00:50:16 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state STARTED 2026-04-17 00:50:16.122253 | orchestrator | 2026-04-17 00:50:16 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:16.122298 | orchestrator | 2026-04-17 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:19.171984 | orchestrator | 2026-04-17 00:50:19 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:19.173026 | orchestrator | 2026-04-17 00:50:19 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:19.174246 | orchestrator | 2026-04-17 00:50:19 | INFO  | Task a6e95f2d-dc15-4166-af82-d9ab12fd5489 is in state SUCCESS 2026-04-17 00:50:19.176451 | orchestrator | 2026-04-17 00:50:19.176495 | orchestrator | 2026-04-17 00:50:19.176504 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-17 00:50:19.176511 | orchestrator | 2026-04-17 00:50:19.176518 | orchestrator | TASK [Get kubeconfig file] 
***************************************************** 2026-04-17 00:50:19.176526 | orchestrator | Friday 17 April 2026 00:49:51 +0000 (0:00:00.267) 0:00:00.267 ********** 2026-04-17 00:50:19.176533 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-17 00:50:19.176540 | orchestrator | 2026-04-17 00:50:19.176546 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-17 00:50:19.176552 | orchestrator | Friday 17 April 2026 00:49:52 +0000 (0:00:01.070) 0:00:01.337 ********** 2026-04-17 00:50:19.176559 | orchestrator | changed: [testbed-manager] 2026-04-17 00:50:19.176566 | orchestrator | 2026-04-17 00:50:19.176572 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-17 00:50:19.176579 | orchestrator | Friday 17 April 2026 00:49:54 +0000 (0:00:01.203) 0:00:02.540 ********** 2026-04-17 00:50:19.176585 | orchestrator | changed: [testbed-manager] 2026-04-17 00:50:19.176591 | orchestrator | 2026-04-17 00:50:19.176597 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:50:19.176604 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:50:19.176611 | orchestrator | 2026-04-17 00:50:19.176617 | orchestrator | 2026-04-17 00:50:19.176624 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:50:19.176630 | orchestrator | Friday 17 April 2026 00:49:54 +0000 (0:00:00.412) 0:00:02.953 ********** 2026-04-17 00:50:19.176637 | orchestrator | =============================================================================== 2026-04-17 00:50:19.176642 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s 2026-04-17 00:50:19.176649 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.07s 2026-04-17 
00:50:19.176655 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.41s 2026-04-17 00:50:19.176662 | orchestrator | 2026-04-17 00:50:19.176668 | orchestrator | 2026-04-17 00:50:19.176674 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-17 00:50:19.176681 | orchestrator | 2026-04-17 00:50:19.176687 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-17 00:50:19.176693 | orchestrator | Friday 17 April 2026 00:49:52 +0000 (0:00:00.368) 0:00:00.368 ********** 2026-04-17 00:50:19.176699 | orchestrator | ok: [testbed-manager] 2026-04-17 00:50:19.176707 | orchestrator | 2026-04-17 00:50:19.176713 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-17 00:50:19.176719 | orchestrator | Friday 17 April 2026 00:49:52 +0000 (0:00:00.721) 0:00:01.089 ********** 2026-04-17 00:50:19.176726 | orchestrator | ok: [testbed-manager] 2026-04-17 00:50:19.176751 | orchestrator | 2026-04-17 00:50:19.176758 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-17 00:50:19.176764 | orchestrator | Friday 17 April 2026 00:49:53 +0000 (0:00:00.559) 0:00:01.649 ********** 2026-04-17 00:50:19.176771 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-17 00:50:19.176777 | orchestrator | 2026-04-17 00:50:19.176783 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-17 00:50:19.176789 | orchestrator | Friday 17 April 2026 00:49:54 +0000 (0:00:00.948) 0:00:02.597 ********** 2026-04-17 00:50:19.176795 | orchestrator | changed: [testbed-manager] 2026-04-17 00:50:19.176802 | orchestrator | 2026-04-17 00:50:19.176809 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-17 00:50:19.176815 | orchestrator | Friday 17 
April 2026 00:49:55 +0000 (0:00:00.977) 0:00:03.575 ********** 2026-04-17 00:50:19.176821 | orchestrator | changed: [testbed-manager] 2026-04-17 00:50:19.176827 | orchestrator | 2026-04-17 00:50:19.176833 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-17 00:50:19.176839 | orchestrator | Friday 17 April 2026 00:49:55 +0000 (0:00:00.488) 0:00:04.063 ********** 2026-04-17 00:50:19.176846 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-17 00:50:19.176853 | orchestrator | 2026-04-17 00:50:19.176859 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-17 00:50:19.176865 | orchestrator | Friday 17 April 2026 00:49:57 +0000 (0:00:01.535) 0:00:05.600 ********** 2026-04-17 00:50:19.176872 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-17 00:50:19.176878 | orchestrator | 2026-04-17 00:50:19.176884 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-17 00:50:19.176890 | orchestrator | Friday 17 April 2026 00:49:58 +0000 (0:00:00.783) 0:00:06.383 ********** 2026-04-17 00:50:19.176897 | orchestrator | ok: [testbed-manager] 2026-04-17 00:50:19.176903 | orchestrator | 2026-04-17 00:50:19.176909 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-17 00:50:19.176915 | orchestrator | Friday 17 April 2026 00:49:58 +0000 (0:00:00.394) 0:00:06.778 ********** 2026-04-17 00:50:19.176920 | orchestrator | ok: [testbed-manager] 2026-04-17 00:50:19.176925 | orchestrator | 2026-04-17 00:50:19.176931 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:50:19.176937 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 00:50:19.176943 | orchestrator | 2026-04-17 00:50:19.176949 | orchestrator | 2026-04-17 00:50:19.176955 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:50:19.176960 | orchestrator | Friday 17 April 2026 00:49:58 +0000 (0:00:00.273) 0:00:07.051 ********** 2026-04-17 00:50:19.176966 | orchestrator | =============================================================================== 2026-04-17 00:50:19.176972 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.54s 2026-04-17 00:50:19.176978 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.98s 2026-04-17 00:50:19.176991 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.95s 2026-04-17 00:50:19.177008 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s 2026-04-17 00:50:19.177014 | orchestrator | Get home directory of operator user ------------------------------------- 0.72s 2026-04-17 00:50:19.177020 | orchestrator | Create .kube directory -------------------------------------------------- 0.56s 2026-04-17 00:50:19.177026 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s 2026-04-17 00:50:19.177033 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.39s 2026-04-17 00:50:19.177039 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2026-04-17 00:50:19.177045 | orchestrator | 2026-04-17 00:50:19.177050 | orchestrator | 2026-04-17 00:50:19.177056 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-17 00:50:19.177090 | orchestrator | 2026-04-17 00:50:19.177096 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-17 00:50:19.177102 | orchestrator | Friday 17 April 2026 00:47:58 +0000 (0:00:00.096) 0:00:00.096 ********** 2026-04-17 00:50:19.177108 | orchestrator | ok: 
[localhost] => { 2026-04-17 00:50:19.177116 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-17 00:50:19.177123 | orchestrator | } 2026-04-17 00:50:19.177130 | orchestrator | 2026-04-17 00:50:19.177136 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-17 00:50:19.177142 | orchestrator | Friday 17 April 2026 00:47:58 +0000 (0:00:00.026) 0:00:00.122 ********** 2026-04-17 00:50:19.177150 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-17 00:50:19.177158 | orchestrator | ...ignoring 2026-04-17 00:50:19.177164 | orchestrator | 2026-04-17 00:50:19.177171 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-17 00:50:19.177177 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:02.904) 0:00:03.026 ********** 2026-04-17 00:50:19.177183 | orchestrator | skipping: [localhost] 2026-04-17 00:50:19.177190 | orchestrator | 2026-04-17 00:50:19.177196 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-17 00:50:19.177202 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.034) 0:00:03.060 ********** 2026-04-17 00:50:19.177209 | orchestrator | ok: [localhost] 2026-04-17 00:50:19.177215 | orchestrator | 2026-04-17 00:50:19.177221 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:50:19.177228 | orchestrator | 2026-04-17 00:50:19.177234 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:50:19.177240 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.153) 0:00:03.214 ********** 2026-04-17 00:50:19.177247 | orchestrator | ok: [testbed-node-0] 2026-04-17 
00:50:19.177253 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:50:19.177259 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:50:19.177266 | orchestrator | 2026-04-17 00:50:19.177272 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 00:50:19.177278 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:00.283) 0:00:03.498 ********** 2026-04-17 00:50:19.177285 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-17 00:50:19.177292 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-17 00:50:19.177298 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-17 00:50:19.177305 | orchestrator | 2026-04-17 00:50:19.177311 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-17 00:50:19.177317 | orchestrator | 2026-04-17 00:50:19.177323 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 00:50:19.177330 | orchestrator | Friday 17 April 2026 00:48:03 +0000 (0:00:01.032) 0:00:04.531 ********** 2026-04-17 00:50:19.177336 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:50:19.177343 | orchestrator | 2026-04-17 00:50:19.177349 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-17 00:50:19.177356 | orchestrator | Friday 17 April 2026 00:48:04 +0000 (0:00:01.096) 0:00:05.627 ********** 2026-04-17 00:50:19.177362 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:50:19.177368 | orchestrator | 2026-04-17 00:50:19.177375 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-17 00:50:19.177381 | orchestrator | Friday 17 April 2026 00:48:07 +0000 (0:00:02.769) 0:00:08.397 ********** 2026-04-17 00:50:19.177387 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 00:50:19.177394 | orchestrator | 2026-04-17 00:50:19.177400 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-17 00:50:19.177407 | orchestrator | Friday 17 April 2026 00:48:07 +0000 (0:00:00.309) 0:00:08.706 ********** 2026-04-17 00:50:19.177419 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.177426 | orchestrator | 2026-04-17 00:50:19.177432 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-17 00:50:19.177439 | orchestrator | Friday 17 April 2026 00:48:07 +0000 (0:00:00.390) 0:00:09.097 ********** 2026-04-17 00:50:19.177445 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.177451 | orchestrator | 2026-04-17 00:50:19.177457 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-17 00:50:19.177464 | orchestrator | Friday 17 April 2026 00:48:08 +0000 (0:00:00.275) 0:00:09.372 ********** 2026-04-17 00:50:19.177470 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.177476 | orchestrator | 2026-04-17 00:50:19.177482 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-17 00:50:19.177488 | orchestrator | Friday 17 April 2026 00:48:08 +0000 (0:00:00.395) 0:00:09.767 ********** 2026-04-17 00:50:19.177494 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:50:19.177501 | orchestrator | 2026-04-17 00:50:19.177507 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-17 00:50:19.177528 | orchestrator | Friday 17 April 2026 00:48:09 +0000 (0:00:00.757) 0:00:10.525 ********** 2026-04-17 00:50:19.177535 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:50:19.177548 | orchestrator | 2026-04-17 00:50:19.177555 | orchestrator | TASK [rabbitmq : List 
RabbitMQ policies] *************************************** 2026-04-17 00:50:19.177561 | orchestrator | Friday 17 April 2026 00:48:10 +0000 (0:00:00.912) 0:00:11.437 ********** 2026-04-17 00:50:19.177567 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.177574 | orchestrator | 2026-04-17 00:50:19.177580 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-17 00:50:19.177586 | orchestrator | Friday 17 April 2026 00:48:10 +0000 (0:00:00.671) 0:00:12.108 ********** 2026-04-17 00:50:19.177592 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.177598 | orchestrator | 2026-04-17 00:50:19.177604 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-17 00:50:19.177610 | orchestrator | Friday 17 April 2026 00:48:11 +0000 (0:00:00.456) 0:00:12.565 ********** 2026-04-17 00:50:19.178001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 
00:50:19.178099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178118 | orchestrator | 2026-04-17 00:50:19.178126 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-17 00:50:19.178130 | orchestrator | Friday 17 April 2026 00:48:12 +0000 (0:00:01.719) 0:00:14.284 ********** 2026-04-17 00:50:19.178144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178161 | orchestrator | 2026-04-17 00:50:19.178165 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-17 00:50:19.178168 | orchestrator | Friday 17 April 2026 00:48:14 +0000 (0:00:01.601) 
0:00:15.886 ********** 2026-04-17 00:50:19.178172 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 00:50:19.178177 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 00:50:19.178181 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-17 00:50:19.178184 | orchestrator | 2026-04-17 00:50:19.178188 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-17 00:50:19.178192 | orchestrator | Friday 17 April 2026 00:48:16 +0000 (0:00:02.235) 0:00:18.121 ********** 2026-04-17 00:50:19.178196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 00:50:19.178203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 00:50:19.178207 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-17 00:50:19.178211 | orchestrator | 2026-04-17 00:50:19.178214 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-17 00:50:19.178221 | orchestrator | Friday 17 April 2026 00:48:20 +0000 (0:00:03.959) 0:00:22.081 ********** 2026-04-17 00:50:19.178226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 00:50:19.178229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 00:50:19.178233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-17 00:50:19.178237 | orchestrator | 2026-04-17 00:50:19.178241 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-17 00:50:19.178244 | orchestrator | Friday 17 
April 2026 00:48:22 +0000 (0:00:01.701) 0:00:23.782 ********** 2026-04-17 00:50:19.178248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 00:50:19.178252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 00:50:19.178256 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-17 00:50:19.178260 | orchestrator | 2026-04-17 00:50:19.178263 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-17 00:50:19.178267 | orchestrator | Friday 17 April 2026 00:48:24 +0000 (0:00:01.785) 0:00:25.568 ********** 2026-04-17 00:50:19.178271 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 00:50:19.178275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 00:50:19.178279 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-17 00:50:19.178286 | orchestrator | 2026-04-17 00:50:19.178289 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-17 00:50:19.178293 | orchestrator | Friday 17 April 2026 00:48:26 +0000 (0:00:02.223) 0:00:27.791 ********** 2026-04-17 00:50:19.178297 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 00:50:19.178301 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 00:50:19.178304 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-17 00:50:19.178308 | orchestrator | 2026-04-17 00:50:19.178312 | orchestrator | TASK [rabbitmq : include_tasks] 
************************************************ 2026-04-17 00:50:19.178316 | orchestrator | Friday 17 April 2026 00:48:28 +0000 (0:00:02.452) 0:00:30.244 ********** 2026-04-17 00:50:19.178319 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.178323 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:50:19.178327 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:50:19.178331 | orchestrator | 2026-04-17 00:50:19.178335 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-17 00:50:19.178338 | orchestrator | Friday 17 April 2026 00:48:29 +0000 (0:00:00.461) 0:00:30.705 ********** 2026-04-17 00:50:19.178342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:50:19.178365 | orchestrator | 2026-04-17 
00:50:19.178369 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-17 00:50:19.178372 | orchestrator | Friday 17 April 2026 00:48:31 +0000 (0:00:02.098) 0:00:32.804 ********** 2026-04-17 00:50:19.178376 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:19.178380 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:19.178384 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:19.178388 | orchestrator | 2026-04-17 00:50:19.178391 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-17 00:50:19.178395 | orchestrator | Friday 17 April 2026 00:48:32 +0000 (0:00:01.050) 0:00:33.854 ********** 2026-04-17 00:50:19.178399 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:19.178403 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:19.178407 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:19.178410 | orchestrator | 2026-04-17 00:50:19.178414 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-17 00:50:19.178418 | orchestrator | Friday 17 April 2026 00:48:42 +0000 (0:00:10.038) 0:00:43.892 ********** 2026-04-17 00:50:19.178422 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:19.178426 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:19.178430 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:19.178433 | orchestrator | 2026-04-17 00:50:19.178437 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-17 00:50:19.178441 | orchestrator | 2026-04-17 00:50:19.178445 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-17 00:50:19.178448 | orchestrator | Friday 17 April 2026 00:48:42 +0000 (0:00:00.327) 0:00:44.220 ********** 2026-04-17 00:50:19.178452 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:50:19.178456 | orchestrator | 
2026-04-17 00:50:19.178460 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-17 00:50:19.178464 | orchestrator | Friday 17 April 2026 00:48:43 +0000 (0:00:00.567) 0:00:44.787 ********** 2026-04-17 00:50:19.178468 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:19.178471 | orchestrator | 2026-04-17 00:50:19.178475 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-17 00:50:19.178479 | orchestrator | Friday 17 April 2026 00:48:43 +0000 (0:00:00.274) 0:00:45.061 ********** 2026-04-17 00:50:19.178483 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:19.178487 | orchestrator | 2026-04-17 00:50:19.178491 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-17 00:50:19.178494 | orchestrator | Friday 17 April 2026 00:48:45 +0000 (0:00:01.976) 0:00:47.038 ********** 2026-04-17 00:50:19.178498 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:19.178502 | orchestrator | 2026-04-17 00:50:19.178506 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-17 00:50:19.178510 | orchestrator | 2026-04-17 00:50:19.178513 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-17 00:50:19.178517 | orchestrator | Friday 17 April 2026 00:49:38 +0000 (0:00:53.160) 0:01:40.199 ********** 2026-04-17 00:50:19.178521 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:50:19.178525 | orchestrator | 2026-04-17 00:50:19.178528 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-17 00:50:19.178532 | orchestrator | Friday 17 April 2026 00:49:39 +0000 (0:00:00.607) 0:01:40.807 ********** 2026-04-17 00:50:19.178536 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:50:19.178540 | orchestrator | 2026-04-17 00:50:19.178544 | orchestrator | TASK 
[rabbitmq : Restart rabbitmq container] *********************************** 2026-04-17 00:50:19.178551 | orchestrator | Friday 17 April 2026 00:49:40 +0000 (0:00:00.515) 0:01:41.322 ********** 2026-04-17 00:50:19.178555 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:19.178558 | orchestrator | 2026-04-17 00:50:19.178562 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-17 00:50:19.178566 | orchestrator | Friday 17 April 2026 00:49:47 +0000 (0:00:06.968) 0:01:48.291 ********** 2026-04-17 00:50:19.178570 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:19.178574 | orchestrator | 2026-04-17 00:50:19.178580 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-17 00:50:19.178584 | orchestrator | 2026-04-17 00:50:19.178588 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-17 00:50:19.178592 | orchestrator | Friday 17 April 2026 00:49:57 +0000 (0:00:10.871) 0:01:59.162 ********** 2026-04-17 00:50:19.178596 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:50:19.178600 | orchestrator | 2026-04-17 00:50:19.178606 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-17 00:50:19.178610 | orchestrator | Friday 17 April 2026 00:49:58 +0000 (0:00:00.870) 0:02:00.033 ********** 2026-04-17 00:50:19.178614 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:50:19.178617 | orchestrator | 2026-04-17 00:50:19.178621 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-17 00:50:19.178625 | orchestrator | Friday 17 April 2026 00:49:58 +0000 (0:00:00.197) 0:02:00.231 ********** 2026-04-17 00:50:19.178629 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:19.178632 | orchestrator | 2026-04-17 00:50:19.178636 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2026-04-17 00:50:19.178640 | orchestrator | Friday 17 April 2026 00:50:00 +0000 (0:00:01.899) 0:02:02.130 ********** 2026-04-17 00:50:19.178644 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:19.178648 | orchestrator | 2026-04-17 00:50:19.178651 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-17 00:50:19.178655 | orchestrator | 2026-04-17 00:50:19.178659 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-17 00:50:19.178663 | orchestrator | Friday 17 April 2026 00:50:14 +0000 (0:00:13.555) 0:02:15.686 ********** 2026-04-17 00:50:19.178667 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:50:19.178670 | orchestrator | 2026-04-17 00:50:19.178674 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-17 00:50:19.178678 | orchestrator | Friday 17 April 2026 00:50:15 +0000 (0:00:00.613) 0:02:16.300 ********** 2026-04-17 00:50:19.178682 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:50:19.178685 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:50:19.178689 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:50:19.178693 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-17 00:50:19.178697 | orchestrator | enable_outward_rabbitmq_True 2026-04-17 00:50:19.178701 | orchestrator | 2026-04-17 00:50:19.178704 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-04-17 00:50:19.178708 | orchestrator | skipping: no hosts matched 2026-04-17 00:50:19.178712 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-17 00:50:19.178716 | orchestrator | outward_rabbitmq_restart 2026-04-17 00:50:19.178720 | orchestrator | 2026-04-17 00:50:19.178724 | orchestrator | PLAY [Restart rabbitmq (outward) services] 
************************************* 2026-04-17 00:50:19.178727 | orchestrator | skipping: no hosts matched 2026-04-17 00:50:19.178731 | orchestrator | 2026-04-17 00:50:19.178735 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-17 00:50:19.178739 | orchestrator | skipping: no hosts matched 2026-04-17 00:50:19.178743 | orchestrator | 2026-04-17 00:50:19.178746 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:50:19.178751 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-17 00:50:19.178760 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-17 00:50:19.178764 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:50:19.178768 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:50:19.178772 | orchestrator | 2026-04-17 00:50:19.178776 | orchestrator | 2026-04-17 00:50:19.178780 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:50:19.178783 | orchestrator | Friday 17 April 2026 00:50:17 +0000 (0:00:02.512) 0:02:18.813 ********** 2026-04-17 00:50:19.178787 | orchestrator | =============================================================================== 2026-04-17 00:50:19.178791 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.59s 2026-04-17 00:50:19.178795 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.85s 2026-04-17 00:50:19.178798 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 10.04s 2026-04-17 00:50:19.178803 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.96s 
2026-04-17 00:50:19.178806 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.90s 2026-04-17 00:50:19.178810 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.77s 2026-04-17 00:50:19.178814 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.51s 2026-04-17 00:50:19.178818 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.45s 2026-04-17 00:50:19.178821 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.24s 2026-04-17 00:50:19.178825 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.22s 2026-04-17 00:50:19.178829 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.10s 2026-04-17 00:50:19.178833 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.05s 2026-04-17 00:50:19.178836 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.79s 2026-04-17 00:50:19.178843 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.72s 2026-04-17 00:50:19.178847 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.70s 2026-04-17 00:50:19.178851 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.60s 2026-04-17 00:50:19.178855 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.10s 2026-04-17 00:50:19.178860 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.05s 2026-04-17 00:50:19.178865 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.03s 2026-04-17 00:50:19.178868 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.99s 2026-04-17 
00:50:19.178873 | orchestrator | 2026-04-17 00:50:19 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:19.178877 | orchestrator | 2026-04-17 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:22.228923 | orchestrator | 2026-04-17 00:50:22 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:22.231111 | orchestrator | 2026-04-17 00:50:22 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:22.234263 | orchestrator | 2026-04-17 00:50:22 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:22.234343 | orchestrator | 2026-04-17 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:25.274612 | orchestrator | 2026-04-17 00:50:25 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:25.278250 | orchestrator | 2026-04-17 00:50:25 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:25.279699 | orchestrator | 2026-04-17 00:50:25 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:25.280112 | orchestrator | 2026-04-17 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:28.314988 | orchestrator | 2026-04-17 00:50:28 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:28.315171 | orchestrator | 2026-04-17 00:50:28 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:28.315913 | orchestrator | 2026-04-17 00:50:28 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:28.315950 | orchestrator | 2026-04-17 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:31.362333 | orchestrator | 2026-04-17 00:50:31 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:31.366306 | orchestrator | 2026-04-17 00:50:31 | 
INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:31.369751 | orchestrator | 2026-04-17 00:50:31 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:31.369807 | orchestrator | 2026-04-17 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:34.407079 | orchestrator | 2026-04-17 00:50:34 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:34.409263 | orchestrator | 2026-04-17 00:50:34 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:34.411193 | orchestrator | 2026-04-17 00:50:34 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:34.411271 | orchestrator | 2026-04-17 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:37.454233 | orchestrator | 2026-04-17 00:50:37 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:37.454460 | orchestrator | 2026-04-17 00:50:37 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:37.457118 | orchestrator | 2026-04-17 00:50:37 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:37.457222 | orchestrator | 2026-04-17 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:40.502723 | orchestrator | 2026-04-17 00:50:40 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:40.502807 | orchestrator | 2026-04-17 00:50:40 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:40.503478 | orchestrator | 2026-04-17 00:50:40 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:40.503538 | orchestrator | 2026-04-17 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:43.537524 | orchestrator | 2026-04-17 00:50:43 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in 
state STARTED 2026-04-17 00:50:43.539527 | orchestrator | 2026-04-17 00:50:43 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:43.541197 | orchestrator | 2026-04-17 00:50:43 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state STARTED 2026-04-17 00:50:43.541327 | orchestrator | 2026-04-17 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:46.575568 | orchestrator | 2026-04-17 00:50:46 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:46.576124 | orchestrator | 2026-04-17 00:50:46 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:46.580797 | orchestrator | 2026-04-17 00:50:46 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state STARTED 2026-04-17 00:50:46.582270 | orchestrator | 2026-04-17 00:50:46 | INFO  | Task 04b3f2f9-7fa6-486b-8593-3b38d3f1747c is in state SUCCESS 2026-04-17 00:50:46.582425 | orchestrator | 2026-04-17 00:50:46.584399 | orchestrator | 2026-04-17 00:50:46.584456 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:50:46.584474 | orchestrator | 2026-04-17 00:50:46.584488 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:50:46.584504 | orchestrator | Friday 17 April 2026 00:47:47 +0000 (0:00:00.288) 0:00:00.288 ********** 2026-04-17 00:50:46.584518 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:50:46.584534 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:50:46.584549 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:50:46.584563 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:50:46.584571 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:50:46.584580 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:50:46.584594 | orchestrator | 2026-04-17 00:50:46.584607 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2026-04-17 00:50:46.584633 | orchestrator | Friday 17 April 2026 00:47:47 +0000 (0:00:00.782) 0:00:01.071 ********** 2026-04-17 00:50:46.584647 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 00:50:46.584661 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 00:50:46.584675 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 00:50:46.584689 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 00:50:46.584702 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 00:50:46.584717 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-17 00:50:46.584731 | orchestrator | 2026-04-17 00:50:46.584746 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-17 00:50:46.584760 | orchestrator | 2026-04-17 00:50:46.584775 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-17 00:50:46.584789 | orchestrator | Friday 17 April 2026 00:47:48 +0000 (0:00:00.773) 0:00:01.845 ********** 2026-04-17 00:50:46.584804 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:50:46.584814 | orchestrator | 2026-04-17 00:50:46.584823 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-17 00:50:46.584831 | orchestrator | Friday 17 April 2026 00:47:49 +0000 (0:00:01.034) 0:00:02.879 ********** 2026-04-17 00:50:46.584840 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-17 00:50:46.584849 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-17 00:50:46.584858 | 
orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-17 00:50:46.584866 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-17 00:50:46.584875 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-17 00:50:46.584883 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-17 00:50:46.584891 | orchestrator | 2026-04-17 00:50:46.584900 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-17 00:50:46.584908 | orchestrator | Friday 17 April 2026 00:47:51 +0000 (0:00:01.798) 0:00:04.678 ********** 2026-04-17 00:50:46.584917 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-17 00:50:46.584926 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-17 00:50:46.584934 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-17 00:50:46.584943 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-17 00:50:46.584978 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-17 00:50:46.584999 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-17 00:50:46.585018 | orchestrator | 2026-04-17 00:50:46.585032 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-17 00:50:46.585114 | orchestrator | Friday 17 April 2026 00:47:52 +0000 (0:00:01.454) 0:00:06.132 ********** 2026-04-17 00:50:46.585131 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-17 00:50:46.585147 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:50:46.585163 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-17 00:50:46.585176 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:50:46.585185 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-17 00:50:46.585193 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:50:46.585202 | orchestrator 
| skipping: [testbed-node-0] => (item=openvswitch)  2026-04-17 00:50:46.585210 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:46.585219 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-17 00:50:46.585227 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:50:46.585235 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-17 00:50:46.585244 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:50:46.585252 | orchestrator | 2026-04-17 00:50:46.585275 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-17 00:50:46.585284 | orchestrator | Friday 17 April 2026 00:47:53 +0000 (0:00:01.077) 0:00:07.209 ********** 2026-04-17 00:50:46.585292 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:50:46.585301 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:50:46.585309 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:50:46.585318 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:50:46.585326 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:50:46.585334 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:50:46.585343 | orchestrator | 2026-04-17 00:50:46.585351 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-17 00:50:46.585360 | orchestrator | Friday 17 April 2026 00:47:54 +0000 (0:00:00.597) 0:00:07.807 ********** 2026-04-17 00:50:46.585390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585469 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585575 | orchestrator | 2026-04-17 00:50:46.585589 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-17 00:50:46.585602 | orchestrator | Friday 17 April 2026 00:47:56 +0000 (0:00:01.529) 0:00:09.336 ********** 2026-04-17 00:50:46.585616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-04-17 00:50:46.585813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.585853 | orchestrator | 2026-04-17 00:50:46.585863 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-17 00:50:46.585872 | orchestrator | Friday 17 April 2026 00:47:59 +0000 (0:00:03.106) 0:00:12.443 ********** 2026-04-17 00:50:46.585881 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:50:46.585890 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:50:46.585911 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:50:46.585926 | orchestrator 
| skipping: [testbed-node-0] 2026-04-17 00:50:46.585940 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:50:46.585954 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:50:46.585968 | orchestrator | 2026-04-17 00:50:46.585983 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-17 00:50:46.585998 | orchestrator | Friday 17 April 2026 00:47:59 +0000 (0:00:00.655) 0:00:13.099 ********** 2026-04-17 00:50:46.586014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586116 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586224 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-17 00:50:46.586248 | orchestrator | 2026-04-17 00:50:46.586257 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 00:50:46.586266 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:02.382) 0:00:15.481 ********** 2026-04-17 00:50:46.586274 | orchestrator | 2026-04-17 00:50:46.586283 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 00:50:46.586354 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:00.418) 0:00:15.900 ********** 2026-04-17 00:50:46.586374 | orchestrator | 2026-04-17 
00:50:46.586383 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 00:50:46.586391 | orchestrator | Friday 17 April 2026 00:48:03 +0000 (0:00:00.470) 0:00:16.370 ********** 2026-04-17 00:50:46.586400 | orchestrator | 2026-04-17 00:50:46.586408 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 00:50:46.586417 | orchestrator | Friday 17 April 2026 00:48:03 +0000 (0:00:00.253) 0:00:16.624 ********** 2026-04-17 00:50:46.586426 | orchestrator | 2026-04-17 00:50:46.586435 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 00:50:46.586443 | orchestrator | Friday 17 April 2026 00:48:03 +0000 (0:00:00.373) 0:00:16.997 ********** 2026-04-17 00:50:46.586452 | orchestrator | 2026-04-17 00:50:46.586460 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-17 00:50:46.586469 | orchestrator | Friday 17 April 2026 00:48:04 +0000 (0:00:00.283) 0:00:17.281 ********** 2026-04-17 00:50:46.586477 | orchestrator | 2026-04-17 00:50:46.586486 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-17 00:50:46.586494 | orchestrator | Friday 17 April 2026 00:48:04 +0000 (0:00:00.289) 0:00:17.571 ********** 2026-04-17 00:50:46.586503 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:46.586512 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:50:46.586520 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:50:46.586529 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:50:46.586537 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:46.586546 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:46.586554 | orchestrator | 2026-04-17 00:50:46.586563 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-17 00:50:46.586572 | 
orchestrator | Friday 17 April 2026 00:48:13 +0000 (0:00:09.579) 0:00:27.150 ********** 2026-04-17 00:50:46.586580 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:50:46.586589 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:50:46.586598 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:50:46.586606 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:50:46.586614 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:50:46.586623 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:50:46.586631 | orchestrator | 2026-04-17 00:50:46.586640 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-17 00:50:46.586648 | orchestrator | Friday 17 April 2026 00:48:15 +0000 (0:00:01.724) 0:00:28.875 ********** 2026-04-17 00:50:46.586663 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:50:46.586672 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:50:46.586680 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:46.586689 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:46.586697 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:46.586706 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:50:46.586714 | orchestrator | 2026-04-17 00:50:46.586727 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-17 00:50:46.586736 | orchestrator | Friday 17 April 2026 00:48:19 +0000 (0:00:04.317) 0:00:33.193 ********** 2026-04-17 00:50:46.586744 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-17 00:50:46.586754 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-17 00:50:46.586762 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-17 00:50:46.586771 | orchestrator | changed: [testbed-node-2] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-17 00:50:46.586779 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-17 00:50:46.586795 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-17 00:50:46.586804 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-17 00:50:46.586813 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-17 00:50:46.586822 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-17 00:50:46.586830 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-17 00:50:46.586839 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-17 00:50:46.586847 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-17 00:50:46.586856 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 00:50:46.586864 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 00:50:46.586873 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 00:50:46.586882 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 00:50:46.586890 | orchestrator | ok: [testbed-node-1] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 00:50:46.586899 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-17 00:50:46.586907 | orchestrator | 2026-04-17 00:50:46.586916 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-17 00:50:46.586925 | orchestrator | Friday 17 April 2026 00:48:27 +0000 (0:00:07.361) 0:00:40.554 ********** 2026-04-17 00:50:46.586934 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-17 00:50:46.586942 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:50:46.586951 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-17 00:50:46.586960 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-17 00:50:46.586968 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:50:46.586977 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:50:46.586985 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-17 00:50:46.586999 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-17 00:50:46.587008 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-17 00:50:46.587016 | orchestrator | 2026-04-17 00:50:46.587025 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-17 00:50:46.587034 | orchestrator | Friday 17 April 2026 00:48:30 +0000 (0:00:03.111) 0:00:43.666 ********** 2026-04-17 00:50:46.587058 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-17 00:50:46.587067 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:50:46.587076 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-17 00:50:46.587084 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-17 00:50:46.587093 | orchestrator | skipping: [testbed-node-4] 2026-04-17 
00:50:46.587102 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:50:46.587110 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-17 00:50:46.587119 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-17 00:50:46.587127 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-17 00:50:46.587136 | orchestrator | 2026-04-17 00:50:46.587144 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-17 00:50:46.587153 | orchestrator | Friday 17 April 2026 00:48:34 +0000 (0:00:04.374) 0:00:48.040 ********** 2026-04-17 00:50:46.587161 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:50:46.587170 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:50:46.587179 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:50:46.587187 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:50:46.587196 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:50:46.587205 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:50:46.587213 | orchestrator | 2026-04-17 00:50:46.587222 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:50:46.587235 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:50:46.587245 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:50:46.587254 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:50:46.587262 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 00:50:46.587271 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 00:50:46.587285 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 
failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 00:50:46.587294 | orchestrator | 2026-04-17 00:50:46.587303 | orchestrator | 2026-04-17 00:50:46.587311 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:50:46.587320 | orchestrator | Friday 17 April 2026 00:50:44 +0000 (0:02:10.062) 0:02:58.102 ********** 2026-04-17 00:50:46.587329 | orchestrator | =============================================================================== 2026-04-17 00:50:46.587337 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------ 134.38s 2026-04-17 00:50:46.587346 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.58s 2026-04-17 00:50:46.587355 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.36s 2026-04-17 00:50:46.587363 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.37s 2026-04-17 00:50:46.587372 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.11s 2026-04-17 00:50:46.587389 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.11s 2026-04-17 00:50:46.587398 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.38s 2026-04-17 00:50:46.587406 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.09s 2026-04-17 00:50:46.587415 | orchestrator | module-load : Load modules ---------------------------------------------- 1.80s 2026-04-17 00:50:46.587423 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.72s 2026-04-17 00:50:46.587432 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.53s 2026-04-17 00:50:46.587440 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.45s 2026-04-17 
00:50:46.587449 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.08s 2026-04-17 00:50:46.587457 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.03s 2026-04-17 00:50:46.587466 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2026-04-17 00:50:46.587474 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2026-04-17 00:50:46.587483 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.66s 2026-04-17 00:50:46.587491 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.60s 2026-04-17 00:50:46.587500 | orchestrator | 2026-04-17 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:49.615236 | orchestrator | 2026-04-17 00:50:49 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:49.615918 | orchestrator | 2026-04-17 00:50:49 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:49.617759 | orchestrator | 2026-04-17 00:50:49 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state STARTED 2026-04-17 00:50:49.617831 | orchestrator | 2026-04-17 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:52.640174 | orchestrator | 2026-04-17 00:50:52 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:50:52.640810 | orchestrator | 2026-04-17 00:50:52 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:50:52.641714 | orchestrator | 2026-04-17 00:50:52 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state STARTED 2026-04-17 00:50:52.642010 | orchestrator | 2026-04-17 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:50:55.669062 | orchestrator | 2026-04-17 00:50:55 | INFO  | Task 
cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED [identical STARTED polls for tasks cced659e…, bab2767c… and 0dc5c192…, repeated every ~3 seconds from 00:50:58 to 00:53:00, elided] 2026-04-17 00:53:03.505647 | orchestrator | 2026-04-17 00:53:03 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:03.506761 | orchestrator
| 2026-04-17 00:53:03 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:03.508261 | orchestrator | 2026-04-17 00:53:03 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state STARTED 2026-04-17 00:53:03.508288 | orchestrator | 2026-04-17 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:06.552398 | orchestrator | 2026-04-17 00:53:06 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:06.554679 | orchestrator | 2026-04-17 00:53:06 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:06.556067 | orchestrator | 2026-04-17 00:53:06 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state STARTED 2026-04-17 00:53:06.556105 | orchestrator | 2026-04-17 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:09.604027 | orchestrator | 2026-04-17 00:53:09 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:09.605494 | orchestrator | 2026-04-17 00:53:09 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:09.605544 | orchestrator | 2026-04-17 00:53:09 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state STARTED 2026-04-17 00:53:09.605551 | orchestrator | 2026-04-17 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:12.650316 | orchestrator | 2026-04-17 00:53:12 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:12.652414 | orchestrator | 2026-04-17 00:53:12 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:12.656894 | orchestrator | 2026-04-17 00:53:12 | INFO  | Task 0dc5c192-3800-4721-85dd-42436f22ae11 is in state SUCCESS 2026-04-17 00:53:12.659327 | orchestrator | 2026-04-17 00:53:12.659585 | orchestrator | 2026-04-17 00:53:12.659608 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-04-17 00:53:12.659619 | orchestrator | 2026-04-17 00:53:12.659644 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:53:12.659654 | orchestrator | Friday 17 April 2026 00:50:47 +0000 (0:00:00.171) 0:00:00.171 ********** 2026-04-17 00:53:12.659663 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:53:12.659673 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:53:12.659682 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:53:12.659690 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.659698 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.659707 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.659715 | orchestrator | 2026-04-17 00:53:12.659724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 00:53:12.659732 | orchestrator | Friday 17 April 2026 00:50:48 +0000 (0:00:00.564) 0:00:00.735 ********** 2026-04-17 00:53:12.659741 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-17 00:53:12.659750 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-17 00:53:12.659759 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-17 00:53:12.659767 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-17 00:53:12.659776 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-17 00:53:12.659784 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-17 00:53:12.659792 | orchestrator | 2026-04-17 00:53:12.659801 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-17 00:53:12.659809 | orchestrator | 2026-04-17 00:53:12.659817 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-17 00:53:12.659827 | orchestrator | Friday 17 April 2026 00:50:49 +0000 (0:00:00.823) 0:00:01.559 
********** 2026-04-17 00:53:12.659836 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:12.659847 | orchestrator | 2026-04-17 00:53:12.659855 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-17 00:53:12.659883 | orchestrator | Friday 17 April 2026 00:50:50 +0000 (0:00:00.987) 0:00:02.547 ********** 2026-04-17 00:53:12.659893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.659904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.659914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.659924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.659954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.659964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.659974 | orchestrator | 2026-04-17 00:53:12.660000 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-17 00:53:12.660015 | orchestrator | Friday 17 April 2026 00:50:51 +0000 (0:00:01.383) 0:00:03.930 ********** 2026-04-17 00:53:12.660025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-17 00:53:12.660035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660092 | orchestrator | 2026-04-17 00:53:12.660102 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-17 00:53:12.660111 | orchestrator | Friday 17 April 2026 00:50:53 +0000 (0:00:01.623) 0:00:05.554 ********** 2026-04-17 00:53:12.660121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-17 00:53:12.660159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660185 | orchestrator | 2026-04-17 00:53:12.660194 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-17 00:53:12.660202 | orchestrator | Friday 17 April 2026 00:50:54 +0000 (0:00:01.334) 0:00:06.888 ********** 2026-04-17 00:53:12.660211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660261 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660270 | orchestrator | 2026-04-17 00:53:12.660282 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-17 00:53:12.660295 | orchestrator | Friday 17 April 2026 00:50:56 +0000 (0:00:01.520) 0:00:08.408 ********** 2026-04-17 00:53:12.660304 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.660362 | orchestrator | 2026-04-17 00:53:12.660370 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-17 00:53:12.660379 | orchestrator | Friday 17 April 2026 00:50:57 +0000 (0:00:01.530) 0:00:09.939 ********** 2026-04-17 00:53:12.660388 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:53:12.660397 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.660406 | orchestrator | changed: 
[testbed-node-4] 2026-04-17 00:53:12.660414 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:53:12.660423 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.660431 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.660439 | orchestrator | 2026-04-17 00:53:12.660448 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-17 00:53:12.660456 | orchestrator | Friday 17 April 2026 00:51:00 +0000 (0:00:02.855) 0:00:12.794 ********** 2026-04-17 00:53:12.660465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-17 00:53:12.660475 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-17 00:53:12.660483 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-17 00:53:12.660491 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-17 00:53:12.660499 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-17 00:53:12.660507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-17 00:53:12.660514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 00:53:12.660522 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 00:53:12.660534 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 00:53:12.660547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 00:53:12.660556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 00:53:12.660564 | orchestrator | changed: [testbed-node-2] 
=> (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-17 00:53:12.660573 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 00:53:12.660583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 00:53:12.660591 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 00:53:12.660600 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 00:53:12.660608 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 00:53:12.660623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-17 00:53:12.660632 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 00:53:12.660642 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 00:53:12.660650 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 00:53:12.660659 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 00:53:12.660667 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 00:53:12.660676 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-17 
00:53:12.660684 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 00:53:12.660692 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 00:53:12.660700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 00:53:12.660709 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 00:53:12.660717 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 00:53:12.660725 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 00:53:12.660733 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-17 00:53:12.660742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 00:53:12.660750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 00:53:12.660759 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 00:53:12.660768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 00:53:12.660776 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 00:53:12.660785 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 00:53:12.660793 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-17 00:53:12.660801 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2026-04-17 00:53:12.660810 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-17 00:53:12.660819 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 00:53:12.660827 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-17 00:53:12.660837 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-17 00:53:12.660845 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-17 00:53:12.660883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-17 00:53:12.660897 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-17 00:53:12.660912 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-17 00:53:12.660921 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 00:53:12.660928 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 00:53:12.660935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 00:53:12.660943 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-17 
00:53:12.660951 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-17 00:53:12.660959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 00:53:12.660967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-17 00:53:12.660975 | orchestrator | 2026-04-17 00:53:12.660984 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 00:53:12.660992 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:22.105) 0:00:34.899 ********** 2026-04-17 00:53:12.661000 | orchestrator | 2026-04-17 00:53:12.661009 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 00:53:12.661017 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.064) 0:00:34.964 ********** 2026-04-17 00:53:12.661025 | orchestrator | 2026-04-17 00:53:12.661033 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 00:53:12.661042 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.064) 0:00:35.029 ********** 2026-04-17 00:53:12.661050 | orchestrator | 2026-04-17 00:53:12.661058 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 00:53:12.661067 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.065) 0:00:35.094 ********** 2026-04-17 00:53:12.661075 | orchestrator | 2026-04-17 00:53:12.661084 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 00:53:12.661092 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.068) 0:00:35.162 ********** 2026-04-17 00:53:12.661100 | orchestrator | 2026-04-17 00:53:12.661109 | 
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-17 00:53:12.661117 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.061) 0:00:35.224 ********** 2026-04-17 00:53:12.661126 | orchestrator | 2026-04-17 00:53:12.661134 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-17 00:53:12.661142 | orchestrator | Friday 17 April 2026 00:51:23 +0000 (0:00:00.084) 0:00:35.308 ********** 2026-04-17 00:53:12.661150 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:53:12.661159 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.661167 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:53:12.661176 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:53:12.661184 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.661192 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.661200 | orchestrator | 2026-04-17 00:53:12.661209 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-17 00:53:12.661217 | orchestrator | Friday 17 April 2026 00:51:24 +0000 (0:00:01.875) 0:00:37.183 ********** 2026-04-17 00:53:12.661226 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.661234 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:53:12.661243 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:53:12.661251 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.661259 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.661267 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:53:12.661281 | orchestrator | 2026-04-17 00:53:12.661289 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-17 00:53:12.661298 | orchestrator | 2026-04-17 00:53:12.661306 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 00:53:12.661314 | orchestrator | Friday 17 April 2026 
00:51:52 +0000 (0:00:27.084) 0:01:04.268 ********** 2026-04-17 00:53:12.661323 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:12.661331 | orchestrator | 2026-04-17 00:53:12.661340 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 00:53:12.661348 | orchestrator | Friday 17 April 2026 00:51:52 +0000 (0:00:00.459) 0:01:04.727 ********** 2026-04-17 00:53:12.661356 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:12.661365 | orchestrator | 2026-04-17 00:53:12.661373 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-17 00:53:12.661382 | orchestrator | Friday 17 April 2026 00:51:53 +0000 (0:00:00.626) 0:01:05.354 ********** 2026-04-17 00:53:12.661390 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.661399 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.661407 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.661415 | orchestrator | 2026-04-17 00:53:12.661424 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-17 00:53:12.661432 | orchestrator | Friday 17 April 2026 00:51:53 +0000 (0:00:00.802) 0:01:06.156 ********** 2026-04-17 00:53:12.661441 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.661450 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.661458 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.661471 | orchestrator | 2026-04-17 00:53:12.661480 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-17 00:53:12.661498 | orchestrator | Friday 17 April 2026 00:51:54 +0000 (0:00:00.289) 0:01:06.445 ********** 2026-04-17 00:53:12.661506 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.661513 | orchestrator | ok: 
[testbed-node-1] 2026-04-17 00:53:12.661521 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.661529 | orchestrator | 2026-04-17 00:53:12.661537 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-17 00:53:12.661546 | orchestrator | Friday 17 April 2026 00:51:54 +0000 (0:00:00.362) 0:01:06.808 ********** 2026-04-17 00:53:12.661554 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.661562 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.661570 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.661579 | orchestrator | 2026-04-17 00:53:12.661587 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-17 00:53:12.661596 | orchestrator | Friday 17 April 2026 00:51:54 +0000 (0:00:00.270) 0:01:07.079 ********** 2026-04-17 00:53:12.661604 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.661613 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.661621 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.661630 | orchestrator | 2026-04-17 00:53:12.661638 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-17 00:53:12.661646 | orchestrator | Friday 17 April 2026 00:51:55 +0000 (0:00:00.269) 0:01:07.348 ********** 2026-04-17 00:53:12.661655 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.661663 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.661671 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.661680 | orchestrator | 2026-04-17 00:53:12.661688 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-17 00:53:12.661696 | orchestrator | Friday 17 April 2026 00:51:55 +0000 (0:00:00.275) 0:01:07.623 ********** 2026-04-17 00:53:12.661705 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.661713 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.661722 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.661730 | orchestrator | 2026-04-17 00:53:12.661738 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-17 00:53:12.661752 | orchestrator | Friday 17 April 2026 00:51:55 +0000 (0:00:00.235) 0:01:07.859 ********** 2026-04-17 00:53:12.661761 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.661769 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.661778 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.661787 | orchestrator | 2026-04-17 00:53:12.661795 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-17 00:53:12.661803 | orchestrator | Friday 17 April 2026 00:51:56 +0000 (0:00:00.542) 0:01:08.401 ********** 2026-04-17 00:53:12.661812 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.661820 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.661828 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.661836 | orchestrator | 2026-04-17 00:53:12.661845 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-17 00:53:12.661853 | orchestrator | Friday 17 April 2026 00:51:56 +0000 (0:00:00.427) 0:01:08.828 ********** 2026-04-17 00:53:12.661911 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.661920 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.661929 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.661937 | orchestrator | 2026-04-17 00:53:12.661945 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-17 00:53:12.661954 | orchestrator | Friday 17 April 2026 00:51:56 +0000 (0:00:00.280) 0:01:09.108 ********** 2026-04-17 00:53:12.661962 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.661970 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.661978 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.661986 | orchestrator | 2026-04-17 00:53:12.661994 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-17 00:53:12.662003 | orchestrator | Friday 17 April 2026 00:51:57 +0000 (0:00:00.274) 0:01:09.383 ********** 2026-04-17 00:53:12.662012 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662073 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662081 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662089 | orchestrator | 2026-04-17 00:53:12.662098 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-17 00:53:12.662106 | orchestrator | Friday 17 April 2026 00:51:57 +0000 (0:00:00.555) 0:01:09.938 ********** 2026-04-17 00:53:12.662114 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662122 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662130 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662138 | orchestrator | 2026-04-17 00:53:12.662146 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-17 00:53:12.662155 | orchestrator | Friday 17 April 2026 00:51:57 +0000 (0:00:00.308) 0:01:10.247 ********** 2026-04-17 00:53:12.662163 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662171 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662180 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662189 | orchestrator | 2026-04-17 00:53:12.662197 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-17 00:53:12.662206 | orchestrator | Friday 17 April 2026 00:51:58 +0000 (0:00:00.325) 0:01:10.573 ********** 2026-04-17 00:53:12.662214 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662223 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662232 | 
orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662240 | orchestrator | 2026-04-17 00:53:12.662248 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-17 00:53:12.662256 | orchestrator | Friday 17 April 2026 00:51:58 +0000 (0:00:00.271) 0:01:10.844 ********** 2026-04-17 00:53:12.662265 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662273 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662282 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662290 | orchestrator | 2026-04-17 00:53:12.662298 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-17 00:53:12.662314 | orchestrator | Friday 17 April 2026 00:51:59 +0000 (0:00:00.598) 0:01:11.442 ********** 2026-04-17 00:53:12.662323 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662331 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662349 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662358 | orchestrator | 2026-04-17 00:53:12.662366 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-17 00:53:12.662375 | orchestrator | Friday 17 April 2026 00:51:59 +0000 (0:00:00.341) 0:01:11.784 ********** 2026-04-17 00:53:12.662384 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:12.662392 | orchestrator | 2026-04-17 00:53:12.662401 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-17 00:53:12.662409 | orchestrator | Friday 17 April 2026 00:52:00 +0000 (0:00:00.572) 0:01:12.356 ********** 2026-04-17 00:53:12.662417 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.662426 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.662434 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.662442 | 
orchestrator | 2026-04-17 00:53:12.662450 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-17 00:53:12.662459 | orchestrator | Friday 17 April 2026 00:52:00 +0000 (0:00:00.725) 0:01:13.082 ********** 2026-04-17 00:53:12.662467 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.662476 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.662484 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.662492 | orchestrator | 2026-04-17 00:53:12.662500 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-17 00:53:12.662507 | orchestrator | Friday 17 April 2026 00:52:01 +0000 (0:00:00.515) 0:01:13.598 ********** 2026-04-17 00:53:12.662514 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662521 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662529 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662537 | orchestrator | 2026-04-17 00:53:12.662581 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-17 00:53:12.662590 | orchestrator | Friday 17 April 2026 00:52:01 +0000 (0:00:00.317) 0:01:13.915 ********** 2026-04-17 00:53:12.662598 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662606 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662615 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662623 | orchestrator | 2026-04-17 00:53:12.662632 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-17 00:53:12.662640 | orchestrator | Friday 17 April 2026 00:52:01 +0000 (0:00:00.317) 0:01:14.233 ********** 2026-04-17 00:53:12.662648 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662657 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662665 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662673 | orchestrator 
| 2026-04-17 00:53:12.662681 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-17 00:53:12.662690 | orchestrator | Friday 17 April 2026 00:52:02 +0000 (0:00:00.422) 0:01:14.656 ********** 2026-04-17 00:53:12.662698 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662707 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662715 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662723 | orchestrator | 2026-04-17 00:53:12.662731 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-17 00:53:12.662740 | orchestrator | Friday 17 April 2026 00:52:02 +0000 (0:00:00.299) 0:01:14.955 ********** 2026-04-17 00:53:12.662748 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662757 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662765 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662773 | orchestrator | 2026-04-17 00:53:12.662781 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-17 00:53:12.662789 | orchestrator | Friday 17 April 2026 00:52:02 +0000 (0:00:00.265) 0:01:15.221 ********** 2026-04-17 00:53:12.662804 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.662812 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.662821 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.662829 | orchestrator | 2026-04-17 00:53:12.662837 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-17 00:53:12.662845 | orchestrator | Friday 17 April 2026 00:52:03 +0000 (0:00:00.280) 0:01:15.501 ********** 2026-04-17 00:53:12.662855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.662974 | orchestrator | 2026-04-17 00:53:12.662982 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-17 00:53:12.662990 | orchestrator | Friday 17 April 2026 00:52:04 +0000 (0:00:01.428) 0:01:16.929 ********** 2026-04-17 00:53:12.662998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663047 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663083 | orchestrator | 2026-04-17 00:53:12.663091 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-17 00:53:12.663099 | orchestrator | Friday 17 April 2026 00:52:08 +0000 (0:00:03.837) 0:01:20.767 ********** 
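The "Divide hosts by their OVN NB/SB volume availability" and "port liveness" tasks above gate whether the role bootstraps a fresh Raft cluster or joins an existing one: with no pre-existing volumes, every node falls into the bootstrap group and `bootstrap-initial.yml` is included. As a hedged illustration only (kolla-ansible implements this with Ansible `group_by`, not Python; the helper below is hypothetical), the partitioning pattern can be sketched as:

```python
# Sketch of the "divide hosts by predicate" pattern used by the ovn-db role.
# divide_hosts is a hypothetical helper, not kolla-ansible's actual code.

def divide_hosts(hosts, predicate):
    """Partition hosts into (matching, non_matching) by a boolean predicate."""
    matching = [h for h in hosts if predicate(h)]
    non_matching = [h for h in hosts if not predicate(h)]
    return matching, non_matching

# Facts as gathered by the "Checking for any existing OVN DB container volumes"
# task; values are illustrative, matching a fresh deployment with no volumes.
facts = {
    "testbed-node-0": {"nb_volume": False, "sb_volume": False},
    "testbed-node-1": {"nb_volume": False, "sb_volume": False},
    "testbed-node-2": {"nb_volume": False, "sb_volume": False},
}

have_nb, need_nb = divide_hosts(facts, lambda h: facts[h]["nb_volume"])
# With no pre-existing NB volumes, all hosts land in the bootstrap group,
# so the role proceeds to bootstrap-initial.yml (as this log shows).
print(need_nb)
```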
2026-04-17 00:53:12.663107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663191 | orchestrator | 2026-04-17 00:53:12.663199 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-04-17 00:53:12.663207 | orchestrator | Friday 17 April 2026 00:52:10 +0000 (0:00:02.180) 0:01:22.948 ********** 2026-04-17 00:53:12.663215 | orchestrator | 2026-04-17 00:53:12.663222 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 00:53:12.663230 | orchestrator | Friday 17 April 2026 00:52:10 +0000 (0:00:00.066) 0:01:23.015 ********** 2026-04-17 00:53:12.663237 | orchestrator | 2026-04-17 00:53:12.663245 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 00:53:12.663253 | orchestrator | Friday 17 April 2026 00:52:10 +0000 (0:00:00.059) 0:01:23.074 ********** 2026-04-17 00:53:12.663260 | orchestrator | 2026-04-17 00:53:12.663268 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-17 00:53:12.663276 | orchestrator | Friday 17 April 2026 00:52:10 +0000 (0:00:00.069) 0:01:23.143 ********** 2026-04-17 00:53:12.663283 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.663290 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.663298 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.663306 | orchestrator | 2026-04-17 00:53:12.663314 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-17 00:53:12.663321 | orchestrator | Friday 17 April 2026 00:52:17 +0000 (0:00:06.812) 0:01:29.955 ********** 2026-04-17 00:53:12.663329 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.663336 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.663344 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.663352 | orchestrator | 2026-04-17 00:53:12.663359 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-17 00:53:12.663367 | orchestrator | Friday 17 April 2026 00:52:25 +0000 (0:00:07.747) 
0:01:37.702 ********** 2026-04-17 00:53:12.663375 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.663383 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.663391 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.663399 | orchestrator | 2026-04-17 00:53:12.663406 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-17 00:53:12.663414 | orchestrator | Friday 17 April 2026 00:52:27 +0000 (0:00:02.522) 0:01:40.225 ********** 2026-04-17 00:53:12.663422 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.663429 | orchestrator | 2026-04-17 00:53:12.663437 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-17 00:53:12.663444 | orchestrator | Friday 17 April 2026 00:52:28 +0000 (0:00:00.123) 0:01:40.349 ********** 2026-04-17 00:53:12.663452 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.663460 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.663467 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.663475 | orchestrator | 2026-04-17 00:53:12.663483 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-17 00:53:12.663491 | orchestrator | Friday 17 April 2026 00:52:28 +0000 (0:00:00.784) 0:01:41.133 ********** 2026-04-17 00:53:12.663498 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.663505 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.663511 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.663517 | orchestrator | 2026-04-17 00:53:12.663524 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-17 00:53:12.663530 | orchestrator | Friday 17 April 2026 00:52:29 +0000 (0:00:00.572) 0:01:41.706 ********** 2026-04-17 00:53:12.663537 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.663544 | orchestrator | ok: [testbed-node-1] 2026-04-17 
00:53:12.663552 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.663559 | orchestrator | 2026-04-17 00:53:12.663567 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-17 00:53:12.663574 | orchestrator | Friday 17 April 2026 00:52:30 +0000 (0:00:00.861) 0:01:42.568 ********** 2026-04-17 00:53:12.663582 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.663596 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.663603 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.663611 | orchestrator | 2026-04-17 00:53:12.663619 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-17 00:53:12.663626 | orchestrator | Friday 17 April 2026 00:52:30 +0000 (0:00:00.592) 0:01:43.160 ********** 2026-04-17 00:53:12.663634 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.663643 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.663656 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.663664 | orchestrator | 2026-04-17 00:53:12.663675 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-17 00:53:12.663683 | orchestrator | Friday 17 April 2026 00:52:31 +0000 (0:00:00.909) 0:01:44.070 ********** 2026-04-17 00:53:12.663692 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.663699 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.663707 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.663715 | orchestrator | 2026-04-17 00:53:12.663723 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-17 00:53:12.663731 | orchestrator | Friday 17 April 2026 00:52:32 +0000 (0:00:00.801) 0:01:44.871 ********** 2026-04-17 00:53:12.663739 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.663747 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.663755 | orchestrator | ok: [testbed-node-2] 
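The item dicts dumped by the "Ensuring config directories exist" and "Copying over config.json files" tasks in this play follow the usual kolla-ansible shape: a mapping from service name to a container definition (name, group, image, bind mounts). A minimal sketch of iterating such a map — the data below is taken from the log output, abbreviated to one service, while the loop itself is illustrative rather than the role's actual code:

```python
# One entry of the ovn-db service map, copied from the task output in this log.
ovn_db_services = {
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "group": "ovn-nb-db",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.2",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

# The role loops over enabled services, ensuring /etc/kolla/<service>/ exists
# on the host and copying config.json into it; this mirrors that iteration.
for name, svc in ovn_db_services.items():
    if svc["enabled"]:
        config_dir = f"/etc/kolla/{name}"
        print(config_dir, svc["container_name"], svc["image"])
```

The named Docker volumes in the list (`ovn_nb_db`, `kolla_logs`) are exactly what the earlier "Checking for any existing OVN DB container volumes" task probes to decide between bootstrap and join.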
2026-04-17 00:53:12.663762 | orchestrator | 2026-04-17 00:53:12.663770 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-17 00:53:12.663777 | orchestrator | Friday 17 April 2026 00:52:33 +0000 (0:00:00.490) 0:01:45.362 ********** 2026-04-17 00:53:12.663785 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663801 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663818 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663826 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663841 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663850 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663882 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663889 | orchestrator | 2026-04-17 00:53:12.663901 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-17 00:53:12.663909 | orchestrator | Friday 17 April 2026 00:52:34 +0000 (0:00:01.549) 0:01:46.911 ********** 2026-04-17 00:53:12.663917 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663933 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 
00:53:12.663949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.663992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664001 | orchestrator | 2026-04-17 00:53:12.664008 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-17 00:53:12.664016 | orchestrator | Friday 17 April 2026 00:52:38 +0000 (0:00:04.298) 0:01:51.210 ********** 2026-04-17 00:53:12.664034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 00:53:12.664110 | orchestrator | 2026-04-17 00:53:12.664119 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 00:53:12.664126 | orchestrator | Friday 17 April 2026 00:52:41 +0000 (0:00:02.444) 0:01:53.654 ********** 2026-04-17 00:53:12.664134 | orchestrator | 2026-04-17 00:53:12.664142 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 00:53:12.664149 | orchestrator | Friday 17 April 2026 00:52:41 +0000 (0:00:00.078) 0:01:53.733 ********** 2026-04-17 00:53:12.664157 | orchestrator | 2026-04-17 00:53:12.664165 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-17 00:53:12.664172 | orchestrator | Friday 17 April 2026 00:52:41 +0000 (0:00:00.264) 0:01:53.997 ********** 2026-04-17 00:53:12.664180 | orchestrator | 2026-04-17 00:53:12.664187 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-17 00:53:12.664195 | orchestrator | Friday 17 April 2026 00:52:41 +0000 (0:00:00.115) 0:01:54.113 ********** 2026-04-17 00:53:12.664202 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.664210 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.664222 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.664230 | orchestrator | 2026-04-17 00:53:12.664241 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-17 00:53:12.664250 | orchestrator | Friday 17 April 2026 00:52:49 +0000 (0:00:07.802) 0:02:01.915 ********** 2026-04-17 00:53:12.664258 | 
orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.664265 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.664273 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.664281 | orchestrator | 2026-04-17 00:53:12.664288 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-17 00:53:12.664296 | orchestrator | Friday 17 April 2026 00:52:57 +0000 (0:00:07.918) 0:02:09.834 ********** 2026-04-17 00:53:12.664303 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:12.664311 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.664319 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:12.664326 | orchestrator | 2026-04-17 00:53:12.664334 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-17 00:53:12.664341 | orchestrator | Friday 17 April 2026 00:53:05 +0000 (0:00:07.626) 0:02:17.460 ********** 2026-04-17 00:53:12.664349 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.664357 | orchestrator | 2026-04-17 00:53:12.664364 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-17 00:53:12.664372 | orchestrator | Friday 17 April 2026 00:53:05 +0000 (0:00:00.121) 0:02:17.582 ********** 2026-04-17 00:53:12.664380 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.664388 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.664396 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.664403 | orchestrator | 2026-04-17 00:53:12.664415 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-17 00:53:12.664422 | orchestrator | Friday 17 April 2026 00:53:06 +0000 (0:00:00.971) 0:02:18.553 ********** 2026-04-17 00:53:12.664430 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.664438 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:12.664445 | orchestrator | changed: 
[testbed-node-1] 2026-04-17 00:53:12.664453 | orchestrator | 2026-04-17 00:53:12.664460 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-17 00:53:12.664468 | orchestrator | Friday 17 April 2026 00:53:07 +0000 (0:00:00.771) 0:02:19.325 ********** 2026-04-17 00:53:12.664476 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.664484 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.664492 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.664499 | orchestrator | 2026-04-17 00:53:12.664505 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-17 00:53:12.664512 | orchestrator | Friday 17 April 2026 00:53:07 +0000 (0:00:00.781) 0:02:20.107 ********** 2026-04-17 00:53:12.664519 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:12.664526 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:12.664534 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:12.664541 | orchestrator | 2026-04-17 00:53:12.664549 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-17 00:53:12.664557 | orchestrator | Friday 17 April 2026 00:53:08 +0000 (0:00:00.859) 0:02:20.967 ********** 2026-04-17 00:53:12.664564 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.664573 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.664581 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.664588 | orchestrator | 2026-04-17 00:53:12.664596 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-17 00:53:12.664604 | orchestrator | Friday 17 April 2026 00:53:09 +0000 (0:00:00.940) 0:02:21.908 ********** 2026-04-17 00:53:12.664611 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:12.664618 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:12.664626 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:12.664634 | orchestrator 
| 2026-04-17 00:53:12.664643 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:53:12.664650 | orchestrator | testbed-node-0 : ok=45  changed=20  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-17 00:53:12.664661 | orchestrator | testbed-node-1 : ok=44  changed=20  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-17 00:53:12.664668 | orchestrator | testbed-node-2 : ok=44  changed=20  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-17 00:53:12.664676 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:53:12.664684 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:53:12.664692 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:53:12.664700 | orchestrator |
2026-04-17 00:53:12.664707 | orchestrator |
2026-04-17 00:53:12.664715 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:53:12.664723 | orchestrator | Friday 17 April 2026 00:53:10 +0000 (0:00:00.932) 0:02:22.840 **********
2026-04-17 00:53:12.664731 | orchestrator | ===============================================================================
2026-04-17 00:53:12.664738 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.08s
2026-04-17 00:53:12.664746 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.11s
2026-04-17 00:53:12.664754 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.67s
2026-04-17 00:53:12.664768 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.61s
2026-04-17 00:53:12.664776 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 10.15s
2026-04-17 00:53:12.664788 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.30s
2026-04-17 00:53:12.664796 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.84s
2026-04-17 00:53:12.664807 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.86s
2026-04-17 00:53:12.664815 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.44s
2026-04-17 00:53:12.664822 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.18s
2026-04-17 00:53:12.664830 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.88s
2026-04-17 00:53:12.664838 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.62s
2026-04-17 00:53:12.664845 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s
2026-04-17 00:53:12.664853 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.53s
2026-04-17 00:53:12.664895 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.52s
2026-04-17 00:53:12.664905 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2026-04-17 00:53:12.664913 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.38s
2026-04-17 00:53:12.664920 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.33s
2026-04-17 00:53:12.664929 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 0.99s
2026-04-17 00:53:12.664936 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 0.97s
2026-04-17 00:53:12.664944 | orchestrator | 2026-04-17 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-04-17
00:53:15.697721 | orchestrator | 2026-04-17 00:53:15 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:15.698011 | orchestrator | 2026-04-17 00:53:15 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:15.698069 | orchestrator | 2026-04-17 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:18.732554 | orchestrator | 2026-04-17 00:53:18 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:18.735230 | orchestrator | 2026-04-17 00:53:18 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:18.735560 | orchestrator | 2026-04-17 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:21.776045 | orchestrator | 2026-04-17 00:53:21 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:21.776408 | orchestrator | 2026-04-17 00:53:21 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:21.776546 | orchestrator | 2026-04-17 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:24.816519 | orchestrator | 2026-04-17 00:53:24 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:24.817234 | orchestrator | 2026-04-17 00:53:24 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:24.817402 | orchestrator | 2026-04-17 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:27.851107 | orchestrator | 2026-04-17 00:53:27 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:27.852615 | orchestrator | 2026-04-17 00:53:27 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:27.852668 | orchestrator | 2026-04-17 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:30.895100 | orchestrator | 2026-04-17 00:53:30 | INFO  | Task 
cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:30.897113 | orchestrator | 2026-04-17 00:53:30 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:30.897189 | orchestrator | 2026-04-17 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:33.930064 | orchestrator | 2026-04-17 00:53:33 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:33.930331 | orchestrator | 2026-04-17 00:53:33 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:33.930356 | orchestrator | 2026-04-17 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:36.976555 | orchestrator | 2026-04-17 00:53:36 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:36.978056 | orchestrator | 2026-04-17 00:53:36 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:36.978136 | orchestrator | 2026-04-17 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:40.029053 | orchestrator | 2026-04-17 00:53:40 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:40.030220 | orchestrator | 2026-04-17 00:53:40 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:40.030294 | orchestrator | 2026-04-17 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:43.073483 | orchestrator | 2026-04-17 00:53:43 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:43.074215 | orchestrator | 2026-04-17 00:53:43 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:43.074422 | orchestrator | 2026-04-17 00:53:43 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:46.124843 | orchestrator | 2026-04-17 00:53:46 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 
00:53:46.125770 | orchestrator | 2026-04-17 00:53:46 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:46.125853 | orchestrator | 2026-04-17 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:49.166691 | orchestrator | 2026-04-17 00:53:49 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state STARTED 2026-04-17 00:53:49.166775 | orchestrator | 2026-04-17 00:53:49 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:49.166863 | orchestrator | 2026-04-17 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:52.212457 | orchestrator | 2026-04-17 00:53:52 | INFO  | Task cced659e-bf99-4b5d-8020-02ae429a15d2 is in state SUCCESS 2026-04-17 00:53:52.212650 | orchestrator | 2026-04-17 00:53:52.214332 | orchestrator | 2026-04-17 00:53:52.214384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:53:52.214391 | orchestrator | 2026-04-17 00:53:52.214395 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:53:52.214400 | orchestrator | Friday 17 April 2026 00:47:46 +0000 (0:00:00.317) 0:00:00.317 ********** 2026-04-17 00:53:52.214404 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.214409 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.214450 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.214497 | orchestrator | 2026-04-17 00:53:52.214504 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 00:53:52.214511 | orchestrator | Friday 17 April 2026 00:47:47 +0000 (0:00:00.280) 0:00:00.597 ********** 2026-04-17 00:53:52.214517 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-17 00:53:52.214556 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-17 00:53:52.214563 | orchestrator | ok: 
[testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-17 00:53:52.214569 | orchestrator | 2026-04-17 00:53:52.214576 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-17 00:53:52.214582 | orchestrator | 2026-04-17 00:53:52.214589 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-17 00:53:52.214595 | orchestrator | Friday 17 April 2026 00:47:47 +0000 (0:00:00.312) 0:00:00.910 ********** 2026-04-17 00:53:52.214603 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.214658 | orchestrator | 2026-04-17 00:53:52.214667 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-17 00:53:52.214673 | orchestrator | Friday 17 April 2026 00:47:48 +0000 (0:00:00.831) 0:00:01.741 ********** 2026-04-17 00:53:52.214680 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.214684 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.214688 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.214692 | orchestrator | 2026-04-17 00:53:52.214696 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-17 00:53:52.214700 | orchestrator | Friday 17 April 2026 00:47:49 +0000 (0:00:00.887) 0:00:02.628 ********** 2026-04-17 00:53:52.214704 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.214707 | orchestrator | 2026-04-17 00:53:52.214711 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-17 00:53:52.214715 | orchestrator | Friday 17 April 2026 00:47:49 +0000 (0:00:00.640) 0:00:03.269 ********** 2026-04-17 00:53:52.214719 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.214723 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.214726 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 00:53:52.214730 | orchestrator | 2026-04-17 00:53:52.214734 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-17 00:53:52.214772 | orchestrator | Friday 17 April 2026 00:47:50 +0000 (0:00:00.947) 0:00:04.216 ********** 2026-04-17 00:53:52.214777 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-17 00:53:52.214817 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-17 00:53:52.214821 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-17 00:53:52.214825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-17 00:53:52.214829 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-17 00:53:52.214833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-17 00:53:52.214837 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-17 00:53:52.214842 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-17 00:53:52.214846 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-17 00:53:52.214862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-17 00:53:52.214868 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-17 00:53:52.214873 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-17 00:53:52.214879 | orchestrator | 2026-04-17 00:53:52.214884 | orchestrator | TASK [module-load : Load modules] 
********************************************** 2026-04-17 00:53:52.214889 | orchestrator | Friday 17 April 2026 00:47:53 +0000 (0:00:03.021) 0:00:07.238 ********** 2026-04-17 00:53:52.214896 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-17 00:53:52.214905 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-17 00:53:52.214942 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-17 00:53:52.214951 | orchestrator | 2026-04-17 00:53:52.214958 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-17 00:53:52.214965 | orchestrator | Friday 17 April 2026 00:47:54 +0000 (0:00:00.718) 0:00:07.957 ********** 2026-04-17 00:53:52.214972 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-17 00:53:52.214979 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-17 00:53:52.214986 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-17 00:53:52.214993 | orchestrator | 2026-04-17 00:53:52.215000 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-17 00:53:52.215007 | orchestrator | Friday 17 April 2026 00:47:55 +0000 (0:00:01.332) 0:00:09.289 ********** 2026-04-17 00:53:52.215014 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-17 00:53:52.215022 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.215044 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-17 00:53:52.215052 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.215059 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-17 00:53:52.215066 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.215073 | orchestrator | 2026-04-17 00:53:52.215114 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-17 00:53:52.215119 | orchestrator | Friday 17 April 2026 00:47:56 +0000 (0:00:00.916) 
0:00:10.206 ********** 2026-04-17 00:53:52.215126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2026-04-17 00:53:52.215144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.215209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.215215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.215251 | orchestrator | 2026-04-17 00:53:52.215258 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-17 00:53:52.215266 | orchestrator | Friday 17 April 2026 00:47:58 +0000 (0:00:01.996) 0:00:12.203 ********** 2026-04-17 00:53:52.215272 | orchestrator | changed: 
[testbed-node-0] 2026-04-17 00:53:52.215279 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.215287 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.215294 | orchestrator | 2026-04-17 00:53:52.215301 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-17 00:53:52.215308 | orchestrator | Friday 17 April 2026 00:47:59 +0000 (0:00:00.926) 0:00:13.129 ********** 2026-04-17 00:53:52.215314 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-17 00:53:52.215318 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-17 00:53:52.215322 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-17 00:53:52.215325 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-17 00:53:52.215337 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-17 00:53:52.215343 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-17 00:53:52.215349 | orchestrator | 2026-04-17 00:53:52.215355 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-17 00:53:52.215361 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:02.445) 0:00:15.575 ********** 2026-04-17 00:53:52.215367 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.215373 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.215379 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.215386 | orchestrator | 2026-04-17 00:53:52.215392 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-17 00:53:52.215398 | orchestrator | Friday 17 April 2026 00:48:04 +0000 (0:00:02.440) 0:00:18.016 ********** 2026-04-17 00:53:52.215405 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.215411 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.215417 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.215423 | orchestrator | 2026-04-17 
00:53:52.215433 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-17 00:53:52.215437 | orchestrator | Friday 17 April 2026 00:48:06 +0000 (0:00:02.144) 0:00:20.160 ********** 2026-04-17 00:53:52.215441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.215452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.215456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.215462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 00:53:52.215466 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.215470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.215481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.215488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.215492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 00:53:52.215496 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.215505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.215509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.215513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.215521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 00:53:52.215525 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.215529 | orchestrator | 2026-04-17 00:53:52.215532 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-17 00:53:52.215536 | orchestrator | Friday 17 April 2026 00:48:07 +0000 (0:00:00.553) 0:00:20.714 ********** 2026-04-17 00:53:52.215543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 
00:53:52.215556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.215571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 00:53:52.215575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.215586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 00:53:52.215593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.215606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b', '__omit_place_holder__b6e027287d5e4f716c8875b92bbccd7f4365412b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-17 00:53:52.215710 | orchestrator | 2026-04-17 00:53:52.215714 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-17 00:53:52.215718 | orchestrator | Friday 17 April 2026 00:48:11 +0000 (0:00:03.712) 0:00:24.426 ********** 2026-04-17 00:53:52.215722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.215818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.215826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.215832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.215836 | orchestrator | 2026-04-17 00:53:52.215840 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-17 00:53:52.215844 | orchestrator | Friday 17 April 2026 00:48:15 +0000 (0:00:04.118) 0:00:28.545 ********** 2026-04-17 00:53:52.215863 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-17 00:53:52.215867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-17 00:53:52.215872 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-17 00:53:52.215879 | orchestrator | 2026-04-17 00:53:52.215925 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-17 00:53:52.215935 | orchestrator | Friday 17 April 2026 00:48:18 +0000 (0:00:02.975) 0:00:31.520 ********** 2026-04-17 00:53:52.215941 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-17 00:53:52.215948 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-17 00:53:52.215955 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-17 00:53:52.215961 | orchestrator | 2026-04-17 00:53:52.216738 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 
2026-04-17 00:53:52.216798 | orchestrator | Friday 17 April 2026 00:48:22 +0000 (0:00:04.368) 0:00:35.889 ********** 2026-04-17 00:53:52.216808 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.216814 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.216820 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.216825 | orchestrator | 2026-04-17 00:53:52.216831 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-17 00:53:52.216837 | orchestrator | Friday 17 April 2026 00:48:23 +0000 (0:00:01.222) 0:00:37.112 ********** 2026-04-17 00:53:52.216843 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-17 00:53:52.216850 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-17 00:53:52.216856 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-17 00:53:52.216875 | orchestrator | 2026-04-17 00:53:52.216881 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-17 00:53:52.216887 | orchestrator | Friday 17 April 2026 00:48:26 +0000 (0:00:02.849) 0:00:39.961 ********** 2026-04-17 00:53:52.216893 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-17 00:53:52.216900 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-17 00:53:52.216906 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-17 00:53:52.216911 | orchestrator | 2026-04-17 00:53:52.216917 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2026-04-17 00:53:52.216922 | orchestrator | Friday 17 April 2026 00:48:29 +0000 (0:00:02.899) 0:00:42.860 ********** 2026-04-17 00:53:52.216929 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-17 00:53:52.216934 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-17 00:53:52.216940 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-17 00:53:52.216945 | orchestrator | 2026-04-17 00:53:52.216950 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-17 00:53:52.216955 | orchestrator | Friday 17 April 2026 00:48:31 +0000 (0:00:02.474) 0:00:45.335 ********** 2026-04-17 00:53:52.216961 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-17 00:53:52.216967 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-17 00:53:52.216972 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-17 00:53:52.217020 | orchestrator | 2026-04-17 00:53:52.217028 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-17 00:53:52.217034 | orchestrator | Friday 17 April 2026 00:48:34 +0000 (0:00:02.627) 0:00:47.962 ********** 2026-04-17 00:53:52.217040 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.217046 | orchestrator | 2026-04-17 00:53:52.217053 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-17 00:53:52.217060 | orchestrator | Friday 17 April 2026 00:48:37 +0000 (0:00:02.961) 0:00:50.924 ********** 2026-04-17 00:53:52.217073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217187 | orchestrator |
2026-04-17 00:53:52.217254 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-04-17 00:53:52.217259 | orchestrator | Friday 17 April 2026 00:48:40 +0000 (0:00:03.270) 0:00:54.194 **********
2026-04-17 00:53:52.217269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217281 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:53:52.217285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217304 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:53:52.217308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217324 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:53:52.217327 | orchestrator |
2026-04-17 00:53:52.217331 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-04-17 00:53:52.217335 | orchestrator | Friday 17 April 2026 00:48:41 +0000 (0:00:00.570) 0:00:54.764 **********
2026-04-17 00:53:52.217339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217357 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:53:52.217362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217379 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:53:52.217384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217401 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:53:52.217405 | orchestrator |
2026-04-17 00:53:52.217409 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-17 00:53:52.217414 | orchestrator | Friday 17 April 2026 00:48:42 +0000 (0:00:00.915) 0:00:55.680 **********
2026-04-17 00:53:52.217421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217439 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:53:52.217443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217460 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:53:52.217469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217486 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:53:52.217490 | orchestrator |
2026-04-17 00:53:52.217494 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-17 00:53:52.217499 | orchestrator | Friday 17 April 2026 00:48:42 +0000 (0:00:00.631) 0:00:56.311 **********
2026-04-17 00:53:52.217503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217521 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:53:52.217525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217541 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:53:52.217548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217592 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:53:52.217612 | orchestrator |
2026-04-17 00:53:52.217617 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-17 00:53:52.217621 | orchestrator | Friday 17 April 2026 00:48:43 +0000 (0:00:00.641) 0:00:56.953 **********
2026-04-17 00:53:52.217626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217641 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:53:52.217649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217682 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:53:52.217687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217702 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:53:52.217707 | orchestrator |
2026-04-17 00:53:52.217711 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-04-17 00:53:52.217715 | orchestrator | Friday 17 April 2026 00:48:44 +0000 (0:00:01.108) 0:00:58.062 **********
2026-04-17 00:53:52.217720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-17 00:53:52.217741 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:53:52.217745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-17 00:53:52.217750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-17 00:53:52.217754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.217758 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.217767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.217776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.217829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.217840 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.217846 | orchestrator | 2026-04-17 00:53:52.217852 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-17 00:53:52.217857 | orchestrator | Friday 17 April 2026 00:48:45 +0000 (0:00:00.600) 0:00:58.662 ********** 2026-04-17 00:53:52.217863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.217869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
 2026-04-17 00:53:52.217875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.217881 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.217891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.217897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.217910 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.217921 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.217927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.217933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.217939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.217980 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.217989 | orchestrator | 2026-04-17 00:53:52.217995 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-17 00:53:52.217999 | orchestrator | Friday 17 April 2026 00:48:45 +0000 (0:00:00.498) 0:00:59.160 ********** 2026-04-17 00:53:52.218003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.218011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.218056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.218066 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.218075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.218079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.218083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.218087 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.218098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-17 00:53:52.218105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-17 00:53:52.218109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-17 00:53:52.218119 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.218123 | orchestrator | 2026-04-17 00:53:52.218127 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-17 00:53:52.218131 | orchestrator | Friday 17 April 2026 00:48:46 +0000 (0:00:01.127) 0:01:00.287 ********** 2026-04-17 00:53:52.218134 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 00:53:52.218139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 00:53:52.218146 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-17 00:53:52.218150 | orchestrator | 2026-04-17 00:53:52.218153 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-17 00:53:52.218157 | orchestrator | Friday 17 April 2026 00:48:48 +0000 (0:00:01.401) 0:01:01.689 ********** 2026-04-17 00:53:52.218161 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 00:53:52.218165 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 00:53:52.218168 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-17 00:53:52.218172 | orchestrator | 2026-04-17 00:53:52.218176 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-17 00:53:52.218180 | orchestrator | Friday 17 April 2026 00:48:49 +0000 (0:00:01.298) 0:01:02.987 ********** 2026-04-17 00:53:52.218184 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 00:53:52.218187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 00:53:52.218191 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-17 00:53:52.218195 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 00:53:52.218199 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.218252 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 00:53:52.218257 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.218261 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-17 00:53:52.218265 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.218269 | orchestrator | 2026-04-17 00:53:52.218272 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-17 00:53:52.218276 | orchestrator | Friday 17 April 2026 00:48:50 +0000 (0:00:01.267) 0:01:04.254 ********** 2026-04-17 00:53:52.218280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.218284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.218296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-17 00:53:52.218304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.218308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.218312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-17 00:53:52.218316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.218320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.218324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-17 00:53:52.218331 | orchestrator | 2026-04-17 00:53:52.218337 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-17 00:53:52.218341 | orchestrator | Friday 17 April 2026 00:48:53 +0000 (0:00:02.651) 0:01:06.906 ********** 2026-04-17 00:53:52.218345 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.218349 | orchestrator | 2026-04-17 00:53:52.218353 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-17 00:53:52.218356 | orchestrator | Friday 17 
April 2026 00:48:54 +0000 (0:00:00.553) 0:01:07.459 ********** 2026-04-17 00:53:52.218361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 00:53:52.218369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 00:53:52.218375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.218379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.218382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-17 00:53:52.218412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.218415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218426 | orchestrator | 2026-04-17 00:53:52.218430 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-17 00:53:52.218437 | orchestrator | Friday 17 April 2026 00:48:57 +0000 (0:00:03.039) 0:01:10.498 ********** 2026-04-17 00:53:52.218441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 00:53:52.218449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-04-17 00:53:52.218453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218461 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.218465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 00:53:52.218473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-17 00:53:52.218477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.218484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.218506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218525 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.218529 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.218533 | orchestrator | 2026-04-17 00:53:52.218537 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-17 00:53:52.218541 | orchestrator | Friday 17 April 2026 00:48:57 +0000 (0:00:00.570) 0:01:11.068 ********** 2026-04-17 00:53:52.218547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-17 00:53:52.218553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-17 00:53:52.218557 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.218561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-17 00:53:52.218564 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-17 00:53:52.218568 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.218572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-17 00:53:52.218576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-17 00:53:52.218580 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.218584 | orchestrator | 2026-04-17 00:53:52.218590 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-17 00:53:52.218594 | orchestrator | Friday 17 April 2026 00:48:58 +0000 (0:00:00.763) 0:01:11.832 ********** 2026-04-17 00:53:52.218598 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.218602 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.218605 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.218609 | orchestrator | 2026-04-17 00:53:52.218613 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-17 00:53:52.218616 | orchestrator | Friday 17 April 2026 00:48:59 +0000 (0:00:01.258) 0:01:13.090 ********** 2026-04-17 00:53:52.218620 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.218697 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.218720 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.218725 | orchestrator | 2026-04-17 00:53:52.218730 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-17 00:53:52.218736 | 
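The haproxy-config tasks above iterate over a mapping of kolla-style service definitions and only act on entries that are enabled and carry a `haproxy` key (which is why the `aodh-evaluator`/`aodh-listener`/`aodh-notifier` items are skipped while `aodh-api` is changed). A minimal sketch of that filtering logic, using illustrative data shaped like the log items (not a real inventory):

```python
# Hedged sketch: filter a kolla-style service map (shaped like the log items
# above) down to the entries that define HAProxy frontend/backend settings.
# The data is illustrative, trimmed to the keys relevant to the filter.
services = {
    "aodh-api": {
        "container_name": "aodh_api",
        "enabled": True,
        "haproxy": {
            "aodh_api": {"enabled": "yes", "mode": "http", "port": "8042"},
            "aodh_api_external": {"enabled": "yes", "mode": "http",
                                  "external": True, "port": "8042"},
        },
    },
    # No 'haproxy' key: this one would be skipped by the config task.
    "aodh-evaluator": {"container_name": "aodh_evaluator", "enabled": True},
}

def proxied_services(services):
    """Yield (name, haproxy_config) for enabled services that define one."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("haproxy"):
            yield name, svc["haproxy"]

print([name for name, _ in proxied_services(services)])  # -> ['aodh-api']
```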
orchestrator | Friday 17 April 2026 00:49:01 +0000 (0:00:01.727) 0:01:14.818 ********** 2026-04-17 00:53:52.218746 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.218752 | orchestrator | 2026-04-17 00:53:52.218757 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-17 00:53:52.218763 | orchestrator | Friday 17 April 2026 00:49:01 +0000 (0:00:00.582) 0:01:15.400 ********** 2026-04-17 00:53:52.218771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.218778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.218864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.218886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218902 | orchestrator | 2026-04-17 00:53:52.218907 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-17 00:53:52.218910 | orchestrator | Friday 17 April 2026 00:49:05 +0000 (0:00:03.248) 0:01:18.649 ********** 2026-04-17 00:53:52.218918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.218926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218934 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
00:53:52.218938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.218963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218973 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.218980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.218988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-17 00:53:52.218992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.218996 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219000 | orchestrator | 2026-04-17 00:53:52.219004 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-17 00:53:52.219007 | orchestrator | Friday 17 April 2026 00:49:06 +0000 (0:00:01.174) 0:01:19.824 ********** 2026-04-17 00:53:52.219012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219020 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219034 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219046 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219050 | orchestrator | 2026-04-17 00:53:52.219060 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-17 00:53:52.219064 | orchestrator | Friday 17 April 2026 00:49:07 +0000 (0:00:00.828) 0:01:20.652 ********** 2026-04-17 00:53:52.219067 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.219071 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.219075 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.219079 | orchestrator | 2026-04-17 00:53:52.219082 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-17 00:53:52.219086 | orchestrator | Friday 17 April 2026 00:49:08 +0000 (0:00:01.176) 0:01:21.828 ********** 2026-04-17 00:53:52.219090 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.219094 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.219099 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.219104 | orchestrator | 2026-04-17 00:53:52.219179 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-17 00:53:52.219191 | orchestrator | Friday 17 April 2026 00:49:10 +0000 (0:00:02.001) 
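Note that the `healthcheck` dicts echoed in these items keep all durations as strings of seconds (`'interval': '30'`, `'start_period': '5'`), while a container engine API generally wants typed values (Docker's low-level HealthConfig takes integers in nanoseconds). A hedged, illustrative sketch of that conversion, with field names taken from the log entries:

```python
# Hedged sketch: convert a kolla-style healthcheck dict (string seconds,
# as echoed in the log items above) into a Docker-API-style HealthConfig
# (integer nanoseconds). Illustrative only; not the role's actual code.
NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc):
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl ...']
        "Interval": int(hc["interval"]) * NS_PER_S,
        "Timeout": int(hc["timeout"]) * NS_PER_S,
        "StartPeriod": int(hc["start_period"]) * NS_PER_S,
        "Retries": int(hc["retries"]),
    }

# Values copied from the barbican-api item in the log.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
      "timeout": "30"}
result = to_docker_healthcheck(hc)
```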
0:01:23.830 ********** 2026-04-17 00:53:52.219198 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219205 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219211 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219218 | orchestrator | 2026-04-17 00:53:52.219225 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-17 00:53:52.219229 | orchestrator | Friday 17 April 2026 00:49:10 +0000 (0:00:00.271) 0:01:24.102 ********** 2026-04-17 00:53:52.219233 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.219237 | orchestrator | 2026-04-17 00:53:52.219240 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-17 00:53:52.219244 | orchestrator | Friday 17 April 2026 00:49:11 +0000 (0:00:00.835) 0:01:24.937 ********** 2026-04-17 00:53:52.219248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-17 00:53:52.219255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-17 00:53:52.219264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-17 00:53:52.219273 | orchestrator | 2026-04-17 00:53:52.219277 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-17 00:53:52.219281 | orchestrator | Friday 17 April 2026 00:49:13 +0000 (0:00:02.389) 0:01:27.327 ********** 2026-04-17 00:53:52.219296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-17 00:53:52.219300 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-17 00:53:52.219308 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-17 00:53:52.219316 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219320 | orchestrator | 2026-04-17 00:53:52.219323 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-17 00:53:52.219327 | orchestrator | Friday 17 April 2026 00:49:15 +0000 (0:00:01.529) 0:01:28.856 ********** 2026-04-17 00:53:52.219332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 00:53:52.219341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 00:53:52.219349 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219353 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 00:53:52.219357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 00:53:52.219361 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 00:53:52.219372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-17 00:53:52.219376 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219380 | orchestrator | 2026-04-17 00:53:52.219384 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2026-04-17 00:53:52.219387 | orchestrator | Friday 17 April 2026 00:49:17 +0000 (0:00:01.912) 0:01:30.769 ********** 2026-04-17 00:53:52.219391 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219395 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219399 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219403 | orchestrator | 2026-04-17 00:53:52.219406 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-17 00:53:52.219410 | orchestrator | Friday 17 April 2026 00:49:17 +0000 (0:00:00.421) 0:01:31.190 ********** 2026-04-17 00:53:52.219414 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219418 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219422 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219425 | orchestrator | 2026-04-17 00:53:52.219429 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-17 00:53:52.219433 | orchestrator | Friday 17 April 2026 00:49:19 +0000 (0:00:01.329) 0:01:32.520 ********** 2026-04-17 00:53:52.219437 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.219441 | orchestrator | 2026-04-17 00:53:52.219444 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-17 00:53:52.219448 | orchestrator | Friday 17 April 2026 00:49:20 +0000 (0:00:00.948) 0:01:33.469 ********** 2026-04-17 00:53:52.219452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.219505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.219527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.219582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2026-04-17 00:53:52.219597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219601 | orchestrator | 2026-04-17 00:53:52.219605 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-17 00:53:52.219609 | orchestrator | Friday 17 April 2026 00:49:24 +0000 (0:00:04.333) 0:01:37.803 ********** 2026-04-17 00:53:52.219623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.219627 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219647 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.219654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219669 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.219685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.219705 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219711 | orchestrator | 2026-04-17 00:53:52.219719 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-17 00:53:52.219759 | orchestrator | Friday 17 April 2026 00:49:25 +0000 (0:00:00.880) 0:01:38.683 ********** 2026-04-17 00:53:52.219769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219805 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219822 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-17 00:53:52.219844 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219857 | orchestrator | 2026-04-17 00:53:52.219863 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-17 00:53:52.219869 | orchestrator | Friday 17 April 2026 00:49:26 +0000 (0:00:01.105) 0:01:39.789 ********** 2026-04-17 00:53:52.219872 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.219876 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.219880 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.219884 | orchestrator | 2026-04-17 00:53:52.219888 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-17 00:53:52.219891 | orchestrator | Friday 17 April 2026 00:49:27 +0000 (0:00:01.299) 0:01:41.089 ********** 2026-04-17 00:53:52.219896 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.219901 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.219907 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.219913 | orchestrator | 2026-04-17 00:53:52.219922 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-17 00:53:52.219929 | orchestrator | Friday 17 April 2026 00:49:29 +0000 (0:00:01.909) 0:01:42.999 ********** 2026-04-17 00:53:52.219936 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219941 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219946 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219953 | orchestrator | 2026-04-17 00:53:52.219958 | 
orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-17 00:53:52.219964 | orchestrator | Friday 17 April 2026 00:49:29 +0000 (0:00:00.265) 0:01:43.264 ********** 2026-04-17 00:53:52.219970 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.219976 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.219982 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.219988 | orchestrator | 2026-04-17 00:53:52.219994 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-17 00:53:52.220000 | orchestrator | Friday 17 April 2026 00:49:30 +0000 (0:00:00.254) 0:01:43.519 ********** 2026-04-17 00:53:52.220006 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.220012 | orchestrator | 2026-04-17 00:53:52.220018 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-17 00:53:52.220023 | orchestrator | Friday 17 April 2026 00:49:30 +0000 (0:00:00.828) 0:01:44.347 ********** 2026-04-17 00:53:52.220029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-04-17 00:53:52.220044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 00:53:52.220052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 00:53:52.220112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 00:53:52.220121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220169 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 00:53:52.220193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 00:53:52.220200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-17 00:53:52.220220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220242 | orchestrator | 2026-04-17 00:53:52.220248 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-17 00:53:52.220254 | orchestrator | Friday 17 April 2026 00:49:34 +0000 (0:00:03.742) 0:01:48.089 ********** 2026-04-17 00:53:52.220262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 00:53:52.220269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 00:53:52.220273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 00:53:52.220291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  
2026-04-17 00:53:52.220299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220398 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.220430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 00:53:52.220435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 
00:53:52.220439 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.220443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 00:53:52.220448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 
00:53:52.220462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.220511 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.220518 | orchestrator | 2026-04-17 
00:53:52.220525 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-17 00:53:52.220532 | orchestrator | Friday 17 April 2026 00:49:35 +0000 (0:00:00.880) 0:01:48.970 ********** 2026-04-17 00:53:52.220539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-17 00:53:52.220546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-17 00:53:52.220553 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.220557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-17 00:53:52.220561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-17 00:53:52.220565 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.220569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-17 00:53:52.220572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-17 00:53:52.220576 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.220584 | orchestrator | 2026-04-17 00:53:52.220588 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL 
users config] ********** 2026-04-17 00:53:52.220592 | orchestrator | Friday 17 April 2026 00:49:37 +0000 (0:00:01.688) 0:01:50.659 ********** 2026-04-17 00:53:52.220596 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.220600 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.220604 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.220607 | orchestrator | 2026-04-17 00:53:52.220611 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-17 00:53:52.220615 | orchestrator | Friday 17 April 2026 00:49:38 +0000 (0:00:01.157) 0:01:51.817 ********** 2026-04-17 00:53:52.220619 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.220623 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.220626 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.220630 | orchestrator | 2026-04-17 00:53:52.220634 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-17 00:53:52.220638 | orchestrator | Friday 17 April 2026 00:49:40 +0000 (0:00:02.026) 0:01:53.843 ********** 2026-04-17 00:53:52.220642 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.220646 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.220650 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.220654 | orchestrator | 2026-04-17 00:53:52.220657 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-17 00:53:52.220661 | orchestrator | Friday 17 April 2026 00:49:40 +0000 (0:00:00.219) 0:01:54.063 ********** 2026-04-17 00:53:52.220665 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.220668 | orchestrator | 2026-04-17 00:53:52.220676 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-17 00:53:52.220680 | orchestrator | Friday 17 April 2026 00:49:41 +0000 
(0:00:00.892) 0:01:54.956 ********** 2026-04-17 00:53:52.220689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 00:53:52.220695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.220707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 00:53:52.220730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.220740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 00:53:52.221081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.221112 | orchestrator | 2026-04-17 00:53:52.221121 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-17 00:53:52.221127 | orchestrator | Friday 17 April 2026 00:49:46 +0000 (0:00:05.123) 0:02:00.079 ********** 2026-04-17 00:53:52.221234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 00:53:52.221255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.221268 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 00:53:52.221358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.221366 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.221375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-17 00:53:52.221385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.221390 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221393 | orchestrator | 2026-04-17 00:53:52.221397 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-17 00:53:52.221401 | orchestrator | Friday 17 April 2026 00:49:49 +0000 (0:00:02.993) 0:02:03.073 ********** 2026-04-17 00:53:52.221406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 00:53:52.221413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 00:53:52.221417 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.221421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 00:53:52.221425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 00:53:52.221429 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 00:53:52.221440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-17 00:53:52.221444 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221447 | orchestrator | 2026-04-17 00:53:52.221451 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-17 00:53:52.221455 | orchestrator | Friday 17 April 2026 00:49:53 +0000 (0:00:03.950) 0:02:07.023 ********** 2026-04-17 00:53:52.221459 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.221463 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.221467 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.221472 | orchestrator | 2026-04-17 00:53:52.221478 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-17 00:53:52.221487 | orchestrator | Friday 17 April 2026 00:49:54 +0000 (0:00:01.371) 0:02:08.394 ********** 2026-04-17 00:53:52.221496 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.221506 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.221516 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.221523 | orchestrator | 2026-04-17 00:53:52.221529 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-17 00:53:52.221533 | orchestrator | Friday 17 April 2026 00:49:57 +0000 (0:00:02.066) 0:02:10.460 ********** 2026-04-17 00:53:52.221537 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.221540 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221544 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221548 | orchestrator | 2026-04-17 00:53:52.221552 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-17 00:53:52.221556 | orchestrator | Friday 17 April 2026 00:49:57 
+0000 (0:00:00.261) 0:02:10.722 ********** 2026-04-17 00:53:52.221560 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.221563 | orchestrator | 2026-04-17 00:53:52.221567 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-17 00:53:52.221571 | orchestrator | Friday 17 April 2026 00:49:58 +0000 (0:00:00.887) 0:02:11.609 ********** 2026-04-17 00:53:52.221575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 00:53:52.221580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 00:53:52.221584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 00:53:52.221588 | orchestrator | 2026-04-17 00:53:52.221592 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-17 00:53:52.221595 | orchestrator | Friday 17 April 2026 00:50:01 +0000 (0:00:03.146) 0:02:14.756 ********** 2026-04-17 00:53:52.221603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 00:53:52.221610 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.221617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 00:53:52.221621 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 00:53:52.221629 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221633 | orchestrator | 2026-04-17 00:53:52.221637 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-17 00:53:52.221640 | orchestrator | Friday 17 April 2026 00:50:01 +0000 (0:00:00.384) 0:02:15.141 ********** 2026-04-17 00:53:52.221645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-17 00:53:52.221649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-17 00:53:52.221653 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 00:53:52.221657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-17 00:53:52.221661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-17 00:53:52.221665 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-17 00:53:52.221672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-17 00:53:52.221676 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221680 | orchestrator | 2026-04-17 00:53:52.221683 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-17 00:53:52.221687 | orchestrator | Friday 17 April 2026 00:50:02 +0000 (0:00:00.832) 0:02:15.973 ********** 2026-04-17 00:53:52.221691 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.221695 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.221700 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.221704 | orchestrator | 2026-04-17 00:53:52.221709 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-17 00:53:52.221759 | orchestrator | Friday 17 April 2026 00:50:03 +0000 (0:00:01.428) 0:02:17.401 ********** 2026-04-17 00:53:52.221765 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.221769 | orchestrator | 
changed: [testbed-node-0] 2026-04-17 00:53:52.221776 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.221802 | orchestrator | 2026-04-17 00:53:52.221807 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-17 00:53:52.221811 | orchestrator | Friday 17 April 2026 00:50:05 +0000 (0:00:01.992) 0:02:19.394 ********** 2026-04-17 00:53:52.221815 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.221819 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221824 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221828 | orchestrator | 2026-04-17 00:53:52.221832 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-17 00:53:52.221836 | orchestrator | Friday 17 April 2026 00:50:06 +0000 (0:00:00.286) 0:02:19.681 ********** 2026-04-17 00:53:52.221841 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.221845 | orchestrator | 2026-04-17 00:53:52.221850 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-17 00:53:52.221854 | orchestrator | Friday 17 April 2026 00:50:07 +0000 (0:00:01.081) 0:02:20.762 ********** 2026-04-17 00:53:52.221863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:53:52.221873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:53:52.221893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:53:52.221901 | orchestrator | 2026-04-17 00:53:52.221910 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using 
single external frontend] *** 2026-04-17 00:53:52.221916 | orchestrator | Friday 17 April 2026 00:50:10 +0000 (0:00:03.138) 0:02:23.900 ********** 2026-04-17 00:53:52.221936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:53:52.221944 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.221950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:53:52.221961 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.221979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:53:52.221985 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.221991 | orchestrator | 2026-04-17 00:53:52.221997 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-17 00:53:52.222605 | orchestrator | Friday 17 April 2026 00:50:11 +0000 (0:00:00.612) 0:02:24.513 ********** 2026-04-17 00:53:52.222635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 00:53:52.222642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 00:53:52.222649 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 00:53:52.222653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 00:53:52.222666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-17 00:53:52.222671 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.222675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 00:53:52.222679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 00:53:52.222686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 00:53:52.222690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 00:53:52.222694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-17 00:53:52.222697 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.222702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 00:53:52.222716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 00:53:52.222721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-17 00:53:52.222724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-17 00:53:52.222728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-17 00:53:52.222732 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.222736 | orchestrator | 2026-04-17 00:53:52.222742 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-17 00:53:52.222757 | orchestrator | Friday 17 April 2026 00:50:12 +0000 (0:00:00.967) 0:02:25.481 ********** 2026-04-17 00:53:52.222765 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.222771 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.222778 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.222800 | orchestrator | 2026-04-17 00:53:52.222806 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-17 00:53:52.222812 | orchestrator | Friday 17 April 2026 00:50:13 +0000 (0:00:01.476) 0:02:26.957 ********** 2026-04-17 00:53:52.222817 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.222823 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.222828 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.222834 | orchestrator | 2026-04-17 00:53:52.222840 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-17 00:53:52.222846 | orchestrator | Friday 17 April 2026 00:50:15 +0000 (0:00:01.997) 0:02:28.954 ********** 2026-04-17 00:53:52.222852 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.222858 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.222864 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
00:53:52.222870 | orchestrator | 2026-04-17 00:53:52.222876 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-17 00:53:52.222882 | orchestrator | Friday 17 April 2026 00:50:15 +0000 (0:00:00.314) 0:02:29.268 ********** 2026-04-17 00:53:52.222888 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.222894 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.222899 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.222905 | orchestrator | 2026-04-17 00:53:52.222911 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-17 00:53:52.222915 | orchestrator | Friday 17 April 2026 00:50:16 +0000 (0:00:00.294) 0:02:29.563 ********** 2026-04-17 00:53:52.222919 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.222922 | orchestrator | 2026-04-17 00:53:52.222926 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-17 00:53:52.222930 | orchestrator | Friday 17 April 2026 00:50:17 +0000 (0:00:01.112) 0:02:30.675 ********** 2026-04-17 00:53:52.222940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:53:52.222950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:53:52.222956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:53:52.222965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:53:52.222969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:53:52.222976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:53:52.222980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:53:52.222988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:53:52.222995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:53:52.222999 | orchestrator | 2026-04-17 00:53:52.223003 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-17 00:53:52.223008 | orchestrator | Friday 17 April 2026 00:50:20 +0000 (0:00:03.306) 0:02:33.982 ********** 2026-04-17 00:53:52.223012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:53:52.223018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:53:52.223022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:53:52.223026 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.223035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-17 00:53:52.223043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:53:52.223047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:53:52.223050 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.223054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:53:52.223132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:53:52.223136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:53:52.223147 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.223151 | orchestrator | 2026-04-17 00:53:52.223157 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-17 00:53:52.223161 | orchestrator | Friday 17 
April 2026 00:50:21 +0000 (0:00:00.585) 0:02:34.568 ********** 2026-04-17 00:53:52.223165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 00:53:52.223170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 00:53:52.223174 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.223177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 00:53:52.223182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 00:53:52.223185 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.223189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-17 00:53:52.223193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-04-17 00:53:52.223197 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.223201 | orchestrator | 2026-04-17 00:53:52.223205 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-17 00:53:52.223209 | orchestrator | Friday 17 April 2026 00:50:22 +0000 (0:00:01.019) 0:02:35.588 ********** 2026-04-17 00:53:52.223214 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.223218 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.223222 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.223226 | orchestrator | 2026-04-17 00:53:52.223230 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-17 00:53:52.223234 | orchestrator | Friday 17 April 2026 00:50:23 +0000 (0:00:01.445) 0:02:37.034 ********** 2026-04-17 00:53:52.223238 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.223243 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.223247 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.223251 | orchestrator | 2026-04-17 00:53:52.223255 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-17 00:53:52.223260 | orchestrator | Friday 17 April 2026 00:50:25 +0000 (0:00:02.069) 0:02:39.103 ********** 2026-04-17 00:53:52.223264 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.223268 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.223272 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.223276 | orchestrator | 2026-04-17 00:53:52.223280 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-17 00:53:52.223288 | orchestrator | Friday 17 April 2026 00:50:25 +0000 (0:00:00.266) 0:02:39.369 ********** 2026-04-17 00:53:52.223295 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 
00:53:52.223299 | orchestrator | 2026-04-17 00:53:52.223303 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-17 00:53:52.223308 | orchestrator | Friday 17 April 2026 00:50:26 +0000 (0:00:01.016) 0:02:40.386 ********** 2026-04-17 00:53:52.223312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 00:53:52.223320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 
00:53:52.223326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 00:53:52.223331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 00:53:52.223345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223350 | orchestrator | 2026-04-17 00:53:52.223354 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-17 00:53:52.223359 | orchestrator | Friday 17 April 2026 00:50:29 +0000 (0:00:03.017) 0:02:43.403 ********** 2026-04-17 00:53:52.223366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 00:53:52.223371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223375 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.223380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 00:53:52.223389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223394 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.223401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 00:53:52.223406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223410 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.223417 | orchestrator | 2026-04-17 00:53:52.223423 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-17 00:53:52.223429 | orchestrator | Friday 17 April 2026 00:50:30 +0000 (0:00:00.622) 0:02:44.026 ********** 2026-04-17 00:53:52.223440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-17 00:53:52.223448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-17 00:53:52.223454 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.223460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-17 00:53:52.223468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-17 00:53:52.223479 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.223486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-17 00:53:52.223492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-17 00:53:52.223499 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.223505 | orchestrator | 2026-04-17 00:53:52.223511 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-17 00:53:52.223518 | orchestrator | Friday 17 April 2026 00:50:31 +0000 (0:00:00.894) 0:02:44.920 ********** 2026-04-17 00:53:52.223525 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.223531 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.223536 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.223542 | orchestrator | 2026-04-17 00:53:52.223549 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-17 00:53:52.223555 | orchestrator | Friday 17 April 2026 00:50:32 +0000 (0:00:01.417) 0:02:46.338 ********** 2026-04-17 00:53:52.223561 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.223567 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.223579 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.223586 | orchestrator | 2026-04-17 00:53:52.223592 | 
orchestrator | TASK [include_role : manila] *************************************************** 2026-04-17 00:53:52.223599 | orchestrator | Friday 17 April 2026 00:50:34 +0000 (0:00:02.011) 0:02:48.349 ********** 2026-04-17 00:53:52.223606 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.223612 | orchestrator | 2026-04-17 00:53:52.223619 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-17 00:53:52.223625 | orchestrator | Friday 17 April 2026 00:50:35 +0000 (0:00:01.047) 0:02:49.397 ********** 2026-04-17 00:53:52.223637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 00:53:52.223644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.223669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 00:53:52.224473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-17 00:53:52.224556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224580 | orchestrator | 2026-04-17 00:53:52.224587 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-17 00:53:52.224593 | orchestrator | Friday 17 April 2026 00:50:39 +0000 (0:00:03.597) 0:02:52.994 ********** 2026-04-17 00:53:52.224605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 00:53:52.224613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 00:53:52.224624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224660 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.224671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224691 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.224698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-17 00:53:52.224705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.224727 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.224734 | orchestrator | 2026-04-17 00:53:52.224741 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-17 00:53:52.224748 | orchestrator | Friday 17 April 2026 00:50:40 +0000 (0:00:00.620) 0:02:53.614 ********** 2026-04-17 00:53:52.224754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-17 00:53:52.224761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-17 00:53:52.224768 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.224819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-17 00:53:52.224828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-17 00:53:52.224837 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.224841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-17 00:53:52.224845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-17 00:53:52.224849 
| orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.224853 | orchestrator | 2026-04-17 00:53:52.224856 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-17 00:53:52.224860 | orchestrator | Friday 17 April 2026 00:50:41 +0000 (0:00:00.890) 0:02:54.505 ********** 2026-04-17 00:53:52.224864 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.224868 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.224871 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.224875 | orchestrator | 2026-04-17 00:53:52.224879 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-17 00:53:52.224882 | orchestrator | Friday 17 April 2026 00:50:42 +0000 (0:00:01.321) 0:02:55.827 ********** 2026-04-17 00:53:52.224886 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.224890 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.224893 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.224897 | orchestrator | 2026-04-17 00:53:52.224901 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-17 00:53:52.224905 | orchestrator | Friday 17 April 2026 00:50:44 +0000 (0:00:02.101) 0:02:57.928 ********** 2026-04-17 00:53:52.224908 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.224936 | orchestrator | 2026-04-17 00:53:52.224945 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-17 00:53:52.224950 | orchestrator | Friday 17 April 2026 00:50:45 +0000 (0:00:01.174) 0:02:59.103 ********** 2026-04-17 00:53:52.224957 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 00:53:52.224963 | orchestrator | 2026-04-17 00:53:52.224968 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-17 
00:53:52.224974 | orchestrator | Friday 17 April 2026 00:50:48 +0000 (0:00:03.124) 0:03:02.228 ********** 2026-04-17 00:53:52.224982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:53:52.225000 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-17 00:53:52.225006 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:53:52.225045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-17 00:53:52.225051 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:53:52.225080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-17 00:53:52.225085 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225090 | orchestrator | 2026-04-17 00:53:52.225094 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
2026-04-17 00:53:52.225098 | orchestrator | Friday 17 April 2026 00:50:51 +0000 (0:00:02.224) 0:03:04.453 ********** 2026-04-17 00:53:52.225106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:53:52.225119 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:53:52.225126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-17 00:53:52.225130 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-17 00:53:52.225139 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:53:52.225157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-17 00:53:52.225162 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225166 | orchestrator | 2026-04-17 00:53:52.225171 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-17 
00:53:52.225175 | orchestrator | Friday 17 April 2026 00:50:53 +0000 (0:00:02.330) 0:03:06.783 ********** 2026-04-17 00:53:52.225180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-17 00:53:52.225185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-17 00:53:52.225189 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-17 00:53:52.225200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-17 00:53:52.225209 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-17 00:53:52.225221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2026-04-17 00:53:52.225225 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225230 | orchestrator | 2026-04-17 00:53:52.225234 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-17 00:53:52.225238 | orchestrator | Friday 17 April 2026 00:50:55 +0000 (0:00:02.362) 0:03:09.146 ********** 2026-04-17 00:53:52.225243 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.225247 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.225251 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.225255 | orchestrator | 2026-04-17 00:53:52.225260 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-17 00:53:52.225264 | orchestrator | Friday 17 April 2026 00:50:57 +0000 (0:00:02.167) 0:03:11.313 ********** 2026-04-17 00:53:52.225269 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225273 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225277 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225282 | orchestrator | 2026-04-17 00:53:52.225286 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-17 00:53:52.225290 | orchestrator | Friday 17 April 2026 00:50:59 +0000 (0:00:01.472) 0:03:12.786 ********** 2026-04-17 00:53:52.225295 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225299 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225303 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225307 | orchestrator | 2026-04-17 00:53:52.225312 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-17 00:53:52.225316 | orchestrator | Friday 17 April 2026 00:50:59 +0000 (0:00:00.291) 0:03:13.077 ********** 2026-04-17 00:53:52.225320 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 
00:53:52.225325 | orchestrator | 2026-04-17 00:53:52.225329 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-17 00:53:52.225333 | orchestrator | Friday 17 April 2026 00:51:00 +0000 (0:00:01.228) 0:03:14.306 ********** 2026-04-17 00:53:52.225338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 00:53:52.225349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 00:53:52.225354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-17 00:53:52.225359 | orchestrator | 2026-04-17 00:53:52.225363 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-17 00:53:52.225367 | orchestrator | Friday 17 April 2026 00:51:02 +0000 (0:00:02.004) 0:03:16.310 ********** 2026-04-17 00:53:52.225375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 00:53:52.225380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 00:53:52.225384 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225388 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-17 00:53:52.225401 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225405 | orchestrator | 2026-04-17 00:53:52.225410 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-17 00:53:52.225414 | orchestrator | Friday 17 April 2026 00:51:03 +0000 (0:00:00.344) 0:03:16.654 ********** 2026-04-17 00:53:52.225419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-17 00:53:52.225424 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-17 00:53:52.225436 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-17 00:53:52.225444 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225448 | orchestrator | 2026-04-17 00:53:52.225452 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-17 00:53:52.225456 | orchestrator | Friday 17 April 2026 00:51:03 +0000 (0:00:00.718) 0:03:17.373 ********** 2026-04-17 00:53:52.225460 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225463 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225467 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225472 | orchestrator | 2026-04-17 00:53:52.225478 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-17 00:53:52.225487 | orchestrator | Friday 17 April 2026 00:51:04 +0000 (0:00:00.390) 0:03:17.763 ********** 2026-04-17 00:53:52.225494 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225500 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225506 | orchestrator | skipping: [testbed-node-2] 
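The memcached tasks above all report `skipping` even though the service itself has `enabled: True`: the nested `haproxy.memcached` listener carries `enabled: False`, so no haproxy frontend is rendered for it. A minimal sketch of that two-level filter (a simplification; kolla-ansible's real haproxy-config role does this in Jinja2 templates, and the function name here is hypothetical):

```python
# Service dict mirroring the memcached entry from the log above
# (trimmed to the fields that drive the changed/skipping decision).
memcached = {
    "container_name": "memcached",
    "enabled": True,            # container is deployed...
    "haproxy": {
        "memcached": {
            "enabled": False,   # ...but its haproxy listener is not
            "mode": "tcp",
            "port": "11211",
        }
    },
}

def haproxy_listeners_to_render(service):
    """Return listener names that get a haproxy config stanza.

    A listener is rendered only when both the service's own 'enabled'
    flag and the listener's 'enabled' flag are truthy, matching the
    changed/skipping pattern seen in the task output.
    """
    if not service.get("enabled"):
        return []
    return [
        name
        for name, cfg in service.get("haproxy", {}).items()
        if cfg.get("enabled")
    ]

print(haproxy_listeners_to_render(memcached))  # -> []
```

With `enabled: False` on the listener the list is empty, which is why every node skips the "single external frontend" and firewall tasks for memcached while the plain config-copy task (which keys off the service-level flag) still reports `changed`.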
2026-04-17 00:53:52.225511 | orchestrator | 2026-04-17 00:53:52.225517 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-17 00:53:52.225523 | orchestrator | Friday 17 April 2026 00:51:05 +0000 (0:00:01.031) 0:03:18.795 ********** 2026-04-17 00:53:52.225530 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.225535 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.225542 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.225547 | orchestrator | 2026-04-17 00:53:52.225554 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-17 00:53:52.225558 | orchestrator | Friday 17 April 2026 00:51:05 +0000 (0:00:00.288) 0:03:19.084 ********** 2026-04-17 00:53:52.225562 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.225566 | orchestrator | 2026-04-17 00:53:52.225570 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-17 00:53:52.225573 | orchestrator | Friday 17 April 2026 00:51:07 +0000 (0:00:01.337) 0:03:20.421 ********** 2026-04-17 00:53:52.225577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 00:53:52.225622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 00:53:52.225647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225655 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.225676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.225706 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.225710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 00:53:52.225717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 00:53:52.225731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 00:53:52.225755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 00:53:52.225804 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.225828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.225958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.225962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.225977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.226038 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.226114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226123 | orchestrator | 2026-04-17 00:53:52.226131 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-17 00:53:52.226135 | orchestrator | Friday 17 April 2026 00:51:10 +0000 (0:00:03.906) 0:03:24.328 ********** 2026-04-17 00:53:52.226139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 00:53:52.226143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 00:53:52.226155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 00:53:52.226242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 00:53:52.226289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 00:53:52.226527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 
00:53:52.226627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.226633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.226690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226698 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.226705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-17 00:53:52.226711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226716 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.226722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-17 00:53:52.226882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.226892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-17 00:53:52.226935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-17 00:53:52.226946 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.226952 | orchestrator | 2026-04-17 00:53:52.226958 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-17 00:53:52.226964 | orchestrator | Friday 17 April 2026 00:51:12 +0000 (0:00:01.655) 0:03:25.983 ********** 2026-04-17 00:53:52.226971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-17 00:53:52.226977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-17 00:53:52.226984 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.226991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-17 00:53:52.226997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-17 00:53:52.227002 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-17 00:53:52.227020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-17 00:53:52.227026 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.227032 | orchestrator | 2026-04-17 00:53:52.227039 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-17 00:53:52.227045 | orchestrator | Friday 17 April 2026 00:51:13 +0000 (0:00:01.350) 0:03:27.334 ********** 2026-04-17 00:53:52.227051 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.227058 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.227064 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.227071 | orchestrator | 2026-04-17 00:53:52.227077 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-17 00:53:52.227084 | orchestrator | Friday 17 April 2026 00:51:15 +0000 (0:00:01.398) 0:03:28.733 ********** 2026-04-17 00:53:52.227089 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.227096 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.227102 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.227106 | orchestrator | 2026-04-17 00:53:52.227110 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-17 00:53:52.227114 | orchestrator | Friday 17 April 2026 00:51:17 +0000 (0:00:02.045) 0:03:30.779 ********** 2026-04-17 00:53:52.227117 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.227121 | orchestrator | 2026-04-17 00:53:52.227125 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-17 00:53:52.227129 | 
orchestrator | Friday 17 April 2026 00:51:18 +0000 (0:00:01.496) 0:03:32.276 ********** 2026-04-17 00:53:52.227136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.227160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.227165 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.227173 | orchestrator | 2026-04-17 00:53:52.227177 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-17 00:53:52.227181 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:03.265) 0:03:35.542 ********** 2026-04-17 00:53:52.227184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.227188 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.227197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.227201 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.227220 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.227224 | orchestrator | 2026-04-17 00:53:52.227228 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-17 00:53:52.227235 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.502) 0:03:36.044 ********** 2026-04-17 00:53:52.227239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227248 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.227252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227259 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227271 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.227275 | orchestrator | 2026-04-17 00:53:52.227278 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-17 00:53:52.227282 | orchestrator | Friday 17 April 2026 00:51:24 +0000 (0:00:01.431) 0:03:37.475 ********** 2026-04-17 00:53:52.227286 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.227290 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.227293 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.227297 | orchestrator | 2026-04-17 00:53:52.227301 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-17 00:53:52.227305 | orchestrator | Friday 17 April 2026 00:51:25 +0000 (0:00:01.436) 0:03:38.912 ********** 2026-04-17 00:53:52.227308 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.227312 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.227316 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.227319 | orchestrator | 2026-04-17 00:53:52.227323 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-17 00:53:52.227327 | orchestrator | Friday 17 April 2026 00:51:28 +0000 (0:00:02.541) 0:03:41.453 ********** 2026-04-17 00:53:52.227332 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.227336 | orchestrator | 2026-04-17 00:53:52.227340 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-17 00:53:52.227345 | orchestrator | Friday 17 April 2026 00:51:29 +0000 (0:00:01.869) 0:03:43.323 ********** 2026-04-17 00:53:52.227354 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.227374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.227380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.227423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227432 | orchestrator | 2026-04-17 00:53:52.227436 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-17 00:53:52.227441 | orchestrator | Friday 17 April 2026 00:51:34 +0000 (0:00:04.669) 0:03:47.993 ********** 2026-04-17 00:53:52.227445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.227456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227492 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.227564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.227587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227597 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.227636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.227646 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.227650 | orchestrator | 2026-04-17 00:53:52.227654 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-17 00:53:52.227659 | orchestrator | Friday 17 April 2026 00:51:35 +0000 (0:00:00.635) 0:03:48.629 ********** 2026-04-17 00:53:52.227664 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227711 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227719 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.227723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-17 00:53:52.227750 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.227754 | orchestrator | 2026-04-17 00:53:52.227758 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-17 00:53:52.227762 | orchestrator | Friday 17 April 2026 00:51:36 +0000 (0:00:00.957) 0:03:49.587 ********** 2026-04-17 00:53:52.227766 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.227769 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.227773 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.227777 | orchestrator | 2026-04-17 00:53:52.227802 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-17 00:53:52.227806 | orchestrator | Friday 17 April 2026 00:51:38 +0000 (0:00:01.969) 0:03:51.556 ********** 2026-04-17 
00:53:52.227810 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.227814 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.227817 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.227821 | orchestrator | 2026-04-17 00:53:52.227825 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-17 00:53:52.227829 | orchestrator | Friday 17 April 2026 00:51:40 +0000 (0:00:02.400) 0:03:53.957 ********** 2026-04-17 00:53:52.227832 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.227836 | orchestrator | 2026-04-17 00:53:52.227840 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-17 00:53:52.227844 | orchestrator | Friday 17 April 2026 00:51:41 +0000 (0:00:01.273) 0:03:55.230 ********** 2026-04-17 00:53:52.227848 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-17 00:53:52.227853 | orchestrator | 2026-04-17 00:53:52.227856 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-17 00:53:52.227860 | orchestrator | Friday 17 April 2026 00:51:43 +0000 (0:00:01.357) 0:03:56.588 ********** 2026-04-17 00:53:52.227864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-17 00:53:52.227872 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-17 00:53:52.227876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-17 00:53:52.227880 | orchestrator | 2026-04-17 00:53:52.227887 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-17 00:53:52.227892 | orchestrator | Friday 17 April 2026 00:51:47 +0000 (0:00:03.919) 0:04:00.508 ********** 2026-04-17 00:53:52.227896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.227900 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.227904 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.227922 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.227931 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.227937 | orchestrator | 2026-04-17 00:53:52.227943 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-17 00:53:52.227950 | orchestrator | Friday 17 April 2026 00:51:48 +0000 (0:00:01.108) 0:04:01.617 ********** 2026-04-17 00:53:52.227955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 00:53:52.227962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-04-17 00:53:52.227973 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.227979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 00:53:52.227986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 00:53:52.227992 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.227999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 00:53:52.228006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-17 00:53:52.228013 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228020 | orchestrator | 2026-04-17 00:53:52.228026 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 00:53:52.228033 | orchestrator | Friday 17 April 2026 00:51:49 +0000 (0:00:01.604) 0:04:03.222 ********** 2026-04-17 00:53:52.228039 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.228043 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.228047 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.228050 | orchestrator | 2026-04-17 00:53:52.228054 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-04-17 00:53:52.228058 | orchestrator | Friday 17 April 2026 00:51:52 +0000 (0:00:02.318) 0:04:05.540 ********** 2026-04-17 00:53:52.228062 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.228065 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.228069 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.228073 | orchestrator | 2026-04-17 00:53:52.228079 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-17 00:53:52.228083 | orchestrator | Friday 17 April 2026 00:51:54 +0000 (0:00:02.783) 0:04:08.323 ********** 2026-04-17 00:53:52.228087 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-17 00:53:52.228091 | orchestrator | 2026-04-17 00:53:52.228094 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-17 00:53:52.228098 | orchestrator | Friday 17 April 2026 00:51:55 +0000 (0:00:00.765) 0:04:09.088 ********** 2026-04-17 00:53:52.228102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.228107 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.228127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.228136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.228140 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228144 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228148 | orchestrator | 2026-04-17 00:53:52.228151 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-17 00:53:52.228155 | orchestrator | Friday 17 April 2026 00:51:57 +0000 (0:00:01.429) 0:04:10.518 ********** 2026-04-17 00:53:52.228159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.228163 | orchestrator | skipping: [testbed-node-0] 2026-04-17 
00:53:52.228167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.228171 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-17 00:53:52.228179 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228182 | orchestrator | 2026-04-17 00:53:52.228189 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-17 00:53:52.228192 | orchestrator | Friday 17 April 2026 00:51:58 +0000 (0:00:01.731) 0:04:12.249 ********** 2026-04-17 00:53:52.228196 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228200 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.228204 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228207 | orchestrator | 2026-04-17 00:53:52.228211 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 00:53:52.228215 | orchestrator | Friday 17 April 2026 
00:52:00 +0000 (0:00:01.549) 0:04:13.799 ********** 2026-04-17 00:53:52.228219 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.228223 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.228227 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.228231 | orchestrator | 2026-04-17 00:53:52.228234 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-17 00:53:52.228238 | orchestrator | Friday 17 April 2026 00:52:03 +0000 (0:00:02.885) 0:04:16.684 ********** 2026-04-17 00:53:52.228245 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.228249 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.228253 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.228257 | orchestrator | 2026-04-17 00:53:52.228260 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-17 00:53:52.228264 | orchestrator | Friday 17 April 2026 00:52:06 +0000 (0:00:02.724) 0:04:19.409 ********** 2026-04-17 00:53:52.228268 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-17 00:53:52.228272 | orchestrator | 2026-04-17 00:53:52.228287 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-17 00:53:52.228291 | orchestrator | Friday 17 April 2026 00:52:06 +0000 (0:00:00.762) 0:04:20.171 ********** 2026-04-17 00:53:52.228295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 00:53:52.228300 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.228304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 00:53:52.228307 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 00:53:52.228315 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228319 | orchestrator | 2026-04-17 00:53:52.228323 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-17 00:53:52.228327 | orchestrator | Friday 17 April 2026 00:52:07 +0000 (0:00:01.220) 0:04:21.392 ********** 2026-04-17 00:53:52.228331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 00:53:52.228334 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.228341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 00:53:52.228348 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-17 00:53:52.228356 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228359 | orchestrator | 2026-04-17 00:53:52.228363 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-17 00:53:52.228367 | orchestrator | Friday 17 April 2026 00:52:09 +0000 (0:00:01.399) 0:04:22.791 ********** 2026-04-17 00:53:52.228370 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 00:53:52.228374 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228378 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228382 | orchestrator | 2026-04-17 00:53:52.228386 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-17 00:53:52.228401 | orchestrator | Friday 17 April 2026 00:52:10 +0000 (0:00:01.539) 0:04:24.330 ********** 2026-04-17 00:53:52.228406 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.228410 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.228413 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.228417 | orchestrator | 2026-04-17 00:53:52.228421 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-17 00:53:52.228425 | orchestrator | Friday 17 April 2026 00:52:13 +0000 (0:00:02.629) 0:04:26.960 ********** 2026-04-17 00:53:52.228429 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.228433 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.228436 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.228440 | orchestrator | 2026-04-17 00:53:52.228444 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-17 00:53:52.228448 | orchestrator | Friday 17 April 2026 00:52:16 +0000 (0:00:03.137) 0:04:30.098 ********** 2026-04-17 00:53:52.228451 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.228455 | orchestrator | 2026-04-17 00:53:52.228459 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-17 00:53:52.228463 | orchestrator | Friday 17 April 2026 00:52:18 +0000 (0:00:01.342) 0:04:31.440 ********** 2026-04-17 00:53:52.228467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.228471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 00:53:52.228480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228487 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.228508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.228515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 00:53:52.228521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.228560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.228567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 00:53:52.228572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.228593 | orchestrator | 2026-04-17 00:53:52.228598 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-17 00:53:52.228604 | orchestrator | Friday 17 April 2026 00:52:21 +0000 (0:00:03.642) 0:04:35.082 ********** 2026-04-17 00:53:52.228613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.228620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 00:53:52.228644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.228668 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.228672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.228678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 00:53:52.228682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.228716 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.228733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 00:53:52.228743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 00:53:52.228776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 00:53:52.228823 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228828 | orchestrator | 2026-04-17 00:53:52.228832 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-17 00:53:52.228837 | orchestrator | Friday 17 April 2026 00:52:22 +0000 (0:00:00.870) 0:04:35.953 ********** 2026-04-17 00:53:52.228841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 00:53:52.228845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 00:53:52.228848 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.228852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2026-04-17 00:53:52.228860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 00:53:52.228865 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.228868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 00:53:52.228872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-17 00:53:52.228876 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.228880 | orchestrator | 2026-04-17 00:53:52.228884 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-17 00:53:52.228888 | orchestrator | Friday 17 April 2026 00:52:23 +0000 (0:00:00.868) 0:04:36.822 ********** 2026-04-17 00:53:52.228891 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.228895 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.228900 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.228906 | orchestrator | 2026-04-17 00:53:52.228912 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-17 00:53:52.228918 | orchestrator | Friday 17 April 2026 00:52:24 +0000 (0:00:01.364) 0:04:38.187 ********** 2026-04-17 00:53:52.228924 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.228930 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.228936 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.228942 | orchestrator | 
2026-04-17 00:53:52.228949 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-17 00:53:52.228954 | orchestrator | Friday 17 April 2026 00:52:27 +0000 (0:00:02.262) 0:04:40.449 ********** 2026-04-17 00:53:52.228961 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.228967 | orchestrator | 2026-04-17 00:53:52.228973 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-17 00:53:52.228979 | orchestrator | Friday 17 April 2026 00:52:28 +0000 (0:00:01.465) 0:04:41.914 ********** 2026-04-17 00:53:52.228990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:53:52.229019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:53:52.229028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:53:52.229033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:53:52.229041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:53:52.229057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:53:52.229065 | orchestrator | 2026-04-17 00:53:52.229069 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-17 00:53:52.229073 | orchestrator | Friday 17 April 2026 00:52:33 +0000 (0:00:05.024) 0:04:46.938 ********** 2026-04-17 00:53:52.229077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:53:52.229081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:53:52.229086 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:53:52.229122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:53:52.229134 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:53:52.229146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:53:52.229153 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229159 | orchestrator | 2026-04-17 00:53:52.229166 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-17 00:53:52.229173 | orchestrator | Friday 17 April 2026 00:52:34 +0000 (0:00:00.925) 0:04:47.863 ********** 2026-04-17 00:53:52.229179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-17 00:53:52.229186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-17 00:53:52.229192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-17 00:53:52.229203 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-17 00:53:52.229212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-17 00:53:52.229216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-17 00:53:52.229223 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-17 00:53:52.229230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-17 00:53:52.229249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-17 00:53:52.229253 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229257 | orchestrator | 2026-04-17 00:53:52.229261 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL users config] ********* 2026-04-17 00:53:52.229265 | orchestrator | Friday 17 April 2026 00:52:36 +0000 (0:00:01.580) 0:04:49.444 ********** 2026-04-17 00:53:52.229268 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229272 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229276 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229283 | orchestrator | 2026-04-17 00:53:52.229287 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-17 00:53:52.229290 | orchestrator | Friday 17 April 2026 00:52:36 +0000 (0:00:00.413) 0:04:49.857 ********** 2026-04-17 00:53:52.229294 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229298 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229302 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229305 | orchestrator | 2026-04-17 00:53:52.229309 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-17 00:53:52.229313 | orchestrator | Friday 17 April 2026 00:52:37 +0000 (0:00:01.198) 0:04:51.055 ********** 2026-04-17 00:53:52.229317 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.229321 | orchestrator | 2026-04-17 00:53:52.229324 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-17 00:53:52.229328 | orchestrator | Friday 17 April 2026 00:52:39 +0000 (0:00:01.469) 0:04:52.525 ********** 2026-04-17 00:53:52.229333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 00:53:52.229337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 00:53:52.229345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229370 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 00:53:52.229379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 00:53:52.229382 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 00:53:52.229386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 00:53:52.229416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229428 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 00:53:52.229442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 00:53:52.229447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 00:53:52.229466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 00:53:52.229477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 00:53:52.229487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 00:53:52.229495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229521 | orchestrator | 2026-04-17 00:53:52.229525 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-17 00:53:52.229529 | orchestrator | Friday 17 April 2026 00:52:43 +0000 (0:00:04.248) 0:04:56.774 ********** 2026-04-17 00:53:52.229535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 00:53:52.229539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 00:53:52.229543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 00:53:52.229569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 00:53:52.229573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 00:53:52.229590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229594 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 00:53:52.229605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229612 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 00:53:52.229616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 00:53:52.229624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 
00:53:52.229646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 00:53:52.229657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-04-17 00:53:52.229661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 00:53:52.229675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-17 00:53:52.229686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229690 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 00:53:52.229706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 00:53:52.229710 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229714 | orchestrator | 2026-04-17 00:53:52.229718 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-17 00:53:52.229721 | orchestrator | Friday 17 April 2026 00:52:44 +0000 (0:00:00.839) 0:04:57.614 ********** 2026-04-17 00:53:52.229725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-17 00:53:52.229729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-17 00:53:52.229734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 00:53:52.229741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 00:53:52.229745 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-17 00:53:52.229753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-17 00:53:52.229757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 00:53:52.229761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2026-04-17 00:53:52.229765 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-17 00:53:52.229775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-17 00:53:52.229797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 00:53:52.229805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-17 00:53:52.229809 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229812 | orchestrator | 2026-04-17 00:53:52.229816 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-17 00:53:52.229820 | orchestrator | Friday 17 April 2026 00:52:45 +0000 (0:00:01.261) 0:04:58.875 ********** 2026-04-17 00:53:52.229824 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229828 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229831 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229835 | orchestrator | 2026-04-17 00:53:52.229839 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] 
********* 2026-04-17 00:53:52.229842 | orchestrator | Friday 17 April 2026 00:52:45 +0000 (0:00:00.447) 0:04:59.322 ********** 2026-04-17 00:53:52.229846 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229850 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229854 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229857 | orchestrator | 2026-04-17 00:53:52.229861 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-17 00:53:52.229865 | orchestrator | Friday 17 April 2026 00:52:47 +0000 (0:00:01.401) 0:05:00.724 ********** 2026-04-17 00:53:52.229869 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.229872 | orchestrator | 2026-04-17 00:53:52.229876 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-17 00:53:52.229880 | orchestrator | Friday 17 April 2026 00:52:48 +0000 (0:00:01.482) 0:05:02.207 ********** 2026-04-17 00:53:52.229887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:53:52.229892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:53:52.229902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-17 00:53:52.229907 | orchestrator | 2026-04-17 00:53:52.229911 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-17 00:53:52.229914 | orchestrator | Friday 17 April 2026 00:52:51 +0000 (0:00:03.127) 0:05:05.334 ********** 2026-04-17 00:53:52.229918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 00:53:52.229923 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 00:53:52.229933 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-17 00:53:52.229945 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229948 | orchestrator | 2026-04-17 00:53:52.229954 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-17 00:53:52.229958 | orchestrator | Friday 17 April 2026 00:52:52 +0000 (0:00:00.423) 0:05:05.758 ********** 2026-04-17 00:53:52.229962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 00:53:52.229966 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.229970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 00:53:52.229974 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.229978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-17 00:53:52.229982 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.229985 | orchestrator | 2026-04-17 00:53:52.229989 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-17 00:53:52.229993 | orchestrator | Friday 17 April 2026 00:52:52 +0000 (0:00:00.620) 0:05:06.379 ********** 2026-04-17 00:53:52.229997 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230001 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230005 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230008 | orchestrator | 2026-04-17 00:53:52.230039 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-17 00:53:52.230044 | orchestrator | Friday 17 April 2026 00:52:53 +0000 (0:00:00.885) 0:05:07.264 ********** 2026-04-17 00:53:52.230048 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230052 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230055 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230060 | orchestrator | 2026-04-17 00:53:52.230067 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-17 00:53:52.230090 | orchestrator | Friday 17 April 2026 00:52:55 +0000 (0:00:01.366) 
0:05:08.630 ********** 2026-04-17 00:53:52.230097 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:53:52.230103 | orchestrator | 2026-04-17 00:53:52.230108 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-17 00:53:52.230114 | orchestrator | Friday 17 April 2026 00:52:56 +0000 (0:00:01.452) 0:05:10.083 ********** 2026-04-17 00:53:52.230120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.230130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.230148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.230155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.230162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.230172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-17 00:53:52.230182 | orchestrator | 2026-04-17 00:53:52.230188 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-17 00:53:52.230194 | orchestrator | Friday 17 April 2026 00:53:02 +0000 (0:00:06.147) 0:05:16.230 ********** 2026-04-17 00:53:52.230204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.230211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.230217 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.230230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.230241 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.230258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-17 00:53:52.230265 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230271 | orchestrator | 2026-04-17 00:53:52.230277 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-17 00:53:52.230283 | orchestrator | Friday 17 April 2026 00:53:03 +0000 (0:00:01.036) 0:05:17.267 ********** 2026-04-17 00:53:52.230290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230357 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230380 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-17 00:53:52.230398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}})  2026-04-17 00:53:52.230402 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230406 | orchestrator | 2026-04-17 00:53:52.230409 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-17 00:53:52.230413 | orchestrator | Friday 17 April 2026 00:53:04 +0000 (0:00:00.906) 0:05:18.173 ********** 2026-04-17 00:53:52.230417 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.230421 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.230424 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.230428 | orchestrator | 2026-04-17 00:53:52.230435 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-17 00:53:52.230441 | orchestrator | Friday 17 April 2026 00:53:06 +0000 (0:00:01.256) 0:05:19.429 ********** 2026-04-17 00:53:52.230447 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.230455 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.230466 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.230472 | orchestrator | 2026-04-17 00:53:52.230478 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-17 00:53:52.230485 | orchestrator | Friday 17 April 2026 00:53:08 +0000 (0:00:02.285) 0:05:21.715 ********** 2026-04-17 00:53:52.230491 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230497 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230502 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230508 | orchestrator | 2026-04-17 00:53:52.230513 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-17 00:53:52.230519 | orchestrator | Friday 17 April 2026 00:53:08 +0000 (0:00:00.609) 0:05:22.325 ********** 2026-04-17 00:53:52.230525 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230531 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 00:53:52.230537 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230543 | orchestrator | 2026-04-17 00:53:52.230550 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-17 00:53:52.230555 | orchestrator | Friday 17 April 2026 00:53:09 +0000 (0:00:00.310) 0:05:22.635 ********** 2026-04-17 00:53:52.230576 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230582 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230588 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230594 | orchestrator | 2026-04-17 00:53:52.230600 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-17 00:53:52.230607 | orchestrator | Friday 17 April 2026 00:53:09 +0000 (0:00:00.320) 0:05:22.955 ********** 2026-04-17 00:53:52.230613 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230619 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230625 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230631 | orchestrator | 2026-04-17 00:53:52.230636 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-17 00:53:52.230640 | orchestrator | Friday 17 April 2026 00:53:09 +0000 (0:00:00.313) 0:05:23.269 ********** 2026-04-17 00:53:52.230644 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230647 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230651 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230655 | orchestrator | 2026-04-17 00:53:52.230659 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-17 00:53:52.230662 | orchestrator | Friday 17 April 2026 00:53:10 +0000 (0:00:00.642) 0:05:23.912 ********** 2026-04-17 00:53:52.230666 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230670 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 00:53:52.230673 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230677 | orchestrator | 2026-04-17 00:53:52.230681 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-17 00:53:52.230684 | orchestrator | Friday 17 April 2026 00:53:11 +0000 (0:00:00.550) 0:05:24.463 ********** 2026-04-17 00:53:52.230688 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230692 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230696 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.230699 | orchestrator | 2026-04-17 00:53:52.230703 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-17 00:53:52.230707 | orchestrator | Friday 17 April 2026 00:53:11 +0000 (0:00:00.655) 0:05:25.119 ********** 2026-04-17 00:53:52.230711 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230714 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230718 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.230722 | orchestrator | 2026-04-17 00:53:52.230725 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-17 00:53:52.230729 | orchestrator | Friday 17 April 2026 00:53:12 +0000 (0:00:00.648) 0:05:25.767 ********** 2026-04-17 00:53:52.230733 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230736 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230740 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.230744 | orchestrator | 2026-04-17 00:53:52.230747 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-17 00:53:52.230751 | orchestrator | Friday 17 April 2026 00:53:13 +0000 (0:00:00.915) 0:05:26.683 ********** 2026-04-17 00:53:52.230755 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230758 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230765 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 00:53:52.230769 | orchestrator | 2026-04-17 00:53:52.230773 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-17 00:53:52.230777 | orchestrator | Friday 17 April 2026 00:53:14 +0000 (0:00:00.920) 0:05:27.604 ********** 2026-04-17 00:53:52.230800 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230804 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230808 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.230811 | orchestrator | 2026-04-17 00:53:52.230815 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-17 00:53:52.230819 | orchestrator | Friday 17 April 2026 00:53:15 +0000 (0:00:00.942) 0:05:28.546 ********** 2026-04-17 00:53:52.230823 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.230826 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.230834 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.230838 | orchestrator | 2026-04-17 00:53:52.230841 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-17 00:53:52.230845 | orchestrator | Friday 17 April 2026 00:53:24 +0000 (0:00:09.624) 0:05:38.171 ********** 2026-04-17 00:53:52.230849 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230852 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230856 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.230860 | orchestrator | 2026-04-17 00:53:52.230864 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-17 00:53:52.230867 | orchestrator | Friday 17 April 2026 00:53:25 +0000 (0:00:00.945) 0:05:39.116 ********** 2026-04-17 00:53:52.230871 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.230875 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.230879 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.230882 | 
orchestrator | 2026-04-17 00:53:52.230886 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-17 00:53:52.230894 | orchestrator | Friday 17 April 2026 00:53:33 +0000 (0:00:08.044) 0:05:47.161 ********** 2026-04-17 00:53:52.230898 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.230901 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.230905 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.230909 | orchestrator | 2026-04-17 00:53:52.230913 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-17 00:53:52.230916 | orchestrator | Friday 17 April 2026 00:53:37 +0000 (0:00:03.800) 0:05:50.962 ********** 2026-04-17 00:53:52.230920 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:53:52.230924 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:53:52.230928 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:53:52.230932 | orchestrator | 2026-04-17 00:53:52.230935 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-17 00:53:52.230939 | orchestrator | Friday 17 April 2026 00:53:46 +0000 (0:00:08.806) 0:05:59.768 ********** 2026-04-17 00:53:52.230943 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230947 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230950 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230954 | orchestrator | 2026-04-17 00:53:52.230958 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-17 00:53:52.230961 | orchestrator | Friday 17 April 2026 00:53:47 +0000 (0:00:00.657) 0:06:00.426 ********** 2026-04-17 00:53:52.230965 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230969 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230973 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230976 | orchestrator | 
2026-04-17 00:53:52.230980 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-17 00:53:52.230984 | orchestrator | Friday 17 April 2026 00:53:47 +0000 (0:00:00.358) 0:06:00.785 ********** 2026-04-17 00:53:52.230987 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.230991 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.230995 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.230999 | orchestrator | 2026-04-17 00:53:52.231002 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-17 00:53:52.231006 | orchestrator | Friday 17 April 2026 00:53:47 +0000 (0:00:00.343) 0:06:01.128 ********** 2026-04-17 00:53:52.231010 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.231013 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.231017 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.231021 | orchestrator | 2026-04-17 00:53:52.231025 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-17 00:53:52.231028 | orchestrator | Friday 17 April 2026 00:53:48 +0000 (0:00:00.317) 0:06:01.445 ********** 2026-04-17 00:53:52.231032 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.231036 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.231039 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.231047 | orchestrator | 2026-04-17 00:53:52.231051 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-17 00:53:52.231054 | orchestrator | Friday 17 April 2026 00:53:48 +0000 (0:00:00.682) 0:06:02.128 ********** 2026-04-17 00:53:52.231058 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:53:52.231062 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:53:52.231066 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:53:52.231069 | orchestrator | 
2026-04-17 00:53:52.231073 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-17 00:53:52.231077 | orchestrator | Friday 17 April 2026 00:53:49 +0000 (0:00:00.346) 0:06:02.475 ********** 2026-04-17 00:53:52.231080 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.231084 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.231088 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.231092 | orchestrator | 2026-04-17 00:53:52.231096 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-17 00:53:52.231099 | orchestrator | Friday 17 April 2026 00:53:49 +0000 (0:00:00.919) 0:06:03.394 ********** 2026-04-17 00:53:52.231103 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:53:52.231107 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:53:52.231110 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:53:52.231114 | orchestrator | 2026-04-17 00:53:52.231118 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:53:52.231122 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-17 00:53:52.231130 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-17 00:53:52.231134 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-17 00:53:52.231138 | orchestrator | 2026-04-17 00:53:52.231142 | orchestrator | 2026-04-17 00:53:52.231146 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:53:52.231149 | orchestrator | Friday 17 April 2026 00:53:50 +0000 (0:00:00.932) 0:06:04.326 ********** 2026-04-17 00:53:52.231153 | orchestrator | =============================================================================== 2026-04-17 00:53:52.231157 | orchestrator | 
loadbalancer : Start backup haproxy container --------------------------- 9.62s 2026-04-17 00:53:52.231161 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.81s 2026-04-17 00:53:52.231165 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.05s 2026-04-17 00:53:52.231168 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.15s 2026-04-17 00:53:52.231172 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.12s 2026-04-17 00:53:52.231176 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.02s 2026-04-17 00:53:52.231179 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.67s 2026-04-17 00:53:52.231183 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.37s 2026-04-17 00:53:52.231189 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.33s 2026-04-17 00:53:52.231194 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.25s 2026-04-17 00:53:52.231197 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.12s 2026-04-17 00:53:52.231201 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.95s 2026-04-17 00:53:52.231205 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.92s 2026-04-17 00:53:52.231209 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.91s 2026-04-17 00:53:52.231212 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.80s 2026-04-17 00:53:52.231216 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.74s 2026-04-17 00:53:52.231223 | orchestrator | loadbalancer : 
Copying checks for services which are enabled ------------ 3.71s 2026-04-17 00:53:52.231227 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.64s 2026-04-17 00:53:52.231230 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.60s 2026-04-17 00:53:52.231234 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.31s 2026-04-17 00:53:52.231238 | orchestrator | 2026-04-17 00:53:52 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:52.231242 | orchestrator | 2026-04-17 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:55.259533 | orchestrator | 2026-04-17 00:53:55 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:55.261695 | orchestrator | 2026-04-17 00:53:55 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:53:55.262084 | orchestrator | 2026-04-17 00:53:55 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:53:55.262128 | orchestrator | 2026-04-17 00:53:55 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:53:58.297403 | orchestrator | 2026-04-17 00:53:58 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:53:58.301957 | orchestrator | 2026-04-17 00:53:58 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:53:58.303342 | orchestrator | 2026-04-17 00:53:58 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:53:58.305292 | orchestrator | 2026-04-17 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:54:01.337559 | orchestrator | 2026-04-17 00:54:01 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:54:01.338618 | orchestrator | 2026-04-17 00:54:01 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 
INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:54:59.260292 | orchestrator | 2026-04-17 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:02.307009 | orchestrator | 2026-04-17 00:55:02 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:02.308732 | orchestrator | 2026-04-17 00:55:02 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:02.310445 | orchestrator | 2026-04-17 00:55:02 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:02.310494 | orchestrator | 2026-04-17 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:05.343723 | orchestrator | 2026-04-17 00:55:05 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:05.344242 | orchestrator | 2026-04-17 00:55:05 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:05.345565 | orchestrator | 2026-04-17 00:55:05 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:05.345600 | orchestrator | 2026-04-17 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:08.386599 | orchestrator | 2026-04-17 00:55:08 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:08.389048 | orchestrator | 2026-04-17 00:55:08 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:08.391395 | orchestrator | 2026-04-17 00:55:08 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:08.391451 | orchestrator | 2026-04-17 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:11.437510 | orchestrator | 2026-04-17 00:55:11 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:11.439538 | orchestrator | 2026-04-17 00:55:11 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in 
state STARTED 2026-04-17 00:55:11.443191 | orchestrator | 2026-04-17 00:55:11 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:11.443289 | orchestrator | 2026-04-17 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:14.496053 | orchestrator | 2026-04-17 00:55:14 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:14.500401 | orchestrator | 2026-04-17 00:55:14 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:14.501873 | orchestrator | 2026-04-17 00:55:14 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:14.501931 | orchestrator | 2026-04-17 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:17.544931 | orchestrator | 2026-04-17 00:55:17 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:17.548059 | orchestrator | 2026-04-17 00:55:17 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:17.549920 | orchestrator | 2026-04-17 00:55:17 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:17.549965 | orchestrator | 2026-04-17 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:20.592692 | orchestrator | 2026-04-17 00:55:20 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:20.595095 | orchestrator | 2026-04-17 00:55:20 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:20.596576 | orchestrator | 2026-04-17 00:55:20 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:20.597066 | orchestrator | 2026-04-17 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:23.657416 | orchestrator | 2026-04-17 00:55:23 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:23.659684 | orchestrator 
| 2026-04-17 00:55:23 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:23.663237 | orchestrator | 2026-04-17 00:55:23 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:23.664391 | orchestrator | 2026-04-17 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:26.725489 | orchestrator | 2026-04-17 00:55:26 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:26.728418 | orchestrator | 2026-04-17 00:55:26 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:26.730484 | orchestrator | 2026-04-17 00:55:26 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:26.730561 | orchestrator | 2026-04-17 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:29.775246 | orchestrator | 2026-04-17 00:55:29 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:29.777736 | orchestrator | 2026-04-17 00:55:29 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:29.779192 | orchestrator | 2026-04-17 00:55:29 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:29.779246 | orchestrator | 2026-04-17 00:55:29 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:32.826875 | orchestrator | 2026-04-17 00:55:32 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:32.828865 | orchestrator | 2026-04-17 00:55:32 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:32.830458 | orchestrator | 2026-04-17 00:55:32 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:32.830511 | orchestrator | 2026-04-17 00:55:32 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:35.871834 | orchestrator | 2026-04-17 00:55:35 | INFO  | Task 
bab2767c-cde7-46f7-b455-da2165b39c23 is in state STARTED 2026-04-17 00:55:35.874554 | orchestrator | 2026-04-17 00:55:35 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED 2026-04-17 00:55:35.878979 | orchestrator | 2026-04-17 00:55:35 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED 2026-04-17 00:55:35.879512 | orchestrator | 2026-04-17 00:55:35 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:55:38.928616 | orchestrator | 2026-04-17 00:55:38 | INFO  | Task bab2767c-cde7-46f7-b455-da2165b39c23 is in state SUCCESS 2026-04-17 00:55:38.930867 | orchestrator | 2026-04-17 00:55:38.930914 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-17 00:55:38.930931 | orchestrator | 2.16.14 2026-04-17 00:55:38.930936 | orchestrator | 2026-04-17 00:55:38.930939 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-17 00:55:38.930943 | orchestrator | 2026-04-17 00:55:38.930946 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 00:55:38.930949 | orchestrator | Friday 17 April 2026 00:45:28 +0000 (0:00:00.678) 0:00:00.678 ********** 2026-04-17 00:55:38.930953 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.930957 | orchestrator | 2026-04-17 00:55:38.930960 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 00:55:38.930963 | orchestrator | Friday 17 April 2026 00:45:29 +0000 (0:00:01.442) 0:00:02.121 ********** 2026-04-17 00:55:38.930966 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.930969 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.930972 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.930975 | orchestrator | ok: [testbed-node-2] 2026-04-17 
00:55:38.930978 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.930981 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.930985 | orchestrator | 2026-04-17 00:55:38.930988 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-17 00:55:38.930991 | orchestrator | Friday 17 April 2026 00:45:31 +0000 (0:00:01.722) 0:00:03.843 ********** 2026-04-17 00:55:38.930994 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.930997 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931000 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931003 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931006 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931009 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931012 | orchestrator | 2026-04-17 00:55:38.931015 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 00:55:38.931018 | orchestrator | Friday 17 April 2026 00:45:32 +0000 (0:00:00.631) 0:00:04.474 ********** 2026-04-17 00:55:38.931021 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931024 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931027 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931030 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931033 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931036 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931039 | orchestrator | 2026-04-17 00:55:38.931042 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 00:55:38.931045 | orchestrator | Friday 17 April 2026 00:45:33 +0000 (0:00:01.188) 0:00:05.662 ********** 2026-04-17 00:55:38.931048 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931051 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931055 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931058 | orchestrator | ok: 
[testbed-node-0] 2026-04-17 00:55:38.931061 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931064 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931067 | orchestrator | 2026-04-17 00:55:38.931070 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 00:55:38.931073 | orchestrator | Friday 17 April 2026 00:45:34 +0000 (0:00:01.083) 0:00:06.746 ********** 2026-04-17 00:55:38.931076 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931079 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931082 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931085 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931088 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931092 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931095 | orchestrator | 2026-04-17 00:55:38.931098 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 00:55:38.931101 | orchestrator | Friday 17 April 2026 00:45:35 +0000 (0:00:00.945) 0:00:07.692 ********** 2026-04-17 00:55:38.931104 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931107 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931119 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931123 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931174 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931177 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931180 | orchestrator | 2026-04-17 00:55:38.931184 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 00:55:38.931187 | orchestrator | Friday 17 April 2026 00:45:36 +0000 (0:00:00.829) 0:00:08.521 ********** 2026-04-17 00:55:38.931190 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931214 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931219 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 00:55:38.931222 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931225 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931228 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931231 | orchestrator | 2026-04-17 00:55:38.931234 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 00:55:38.931267 | orchestrator | Friday 17 April 2026 00:45:36 +0000 (0:00:00.638) 0:00:09.160 ********** 2026-04-17 00:55:38.931270 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931273 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931276 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931279 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931282 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931285 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931288 | orchestrator | 2026-04-17 00:55:38.931291 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 00:55:38.931295 | orchestrator | Friday 17 April 2026 00:45:37 +0000 (0:00:00.764) 0:00:09.925 ********** 2026-04-17 00:55:38.931298 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:55:38.931301 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:55:38.931304 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:55:38.931307 | orchestrator | 2026-04-17 00:55:38.931310 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 00:55:38.931313 | orchestrator | Friday 17 April 2026 00:45:38 +0000 (0:00:00.911) 0:00:10.836 ********** 2026-04-17 00:55:38.931316 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931319 | orchestrator | ok: [testbed-node-4] 2026-04-17 
00:55:38.931322 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931336 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931341 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931346 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931351 | orchestrator | 2026-04-17 00:55:38.931356 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 00:55:38.931361 | orchestrator | Friday 17 April 2026 00:45:40 +0000 (0:00:01.897) 0:00:12.734 ********** 2026-04-17 00:55:38.931366 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:55:38.931371 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:55:38.931376 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:55:38.931381 | orchestrator | 2026-04-17 00:55:38.931386 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 00:55:38.931391 | orchestrator | Friday 17 April 2026 00:45:43 +0000 (0:00:03.500) 0:00:16.234 ********** 2026-04-17 00:55:38.931396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 00:55:38.931401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 00:55:38.931406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 00:55:38.931411 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931576 | orchestrator | 2026-04-17 00:55:38.931582 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 00:55:38.931590 | orchestrator | Friday 17 April 2026 00:45:44 +0000 (0:00:00.716) 0:00:16.951 ********** 2026-04-17 00:55:38.931595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931622 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931625 | orchestrator | 2026-04-17 00:55:38.931628 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 00:55:38.931631 | orchestrator | Friday 17 April 2026 00:45:45 +0000 (0:00:01.326) 0:00:18.277 ********** 2026-04-17 00:55:38.931636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931647 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931650 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931653 | orchestrator | 2026-04-17 00:55:38.931656 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 00:55:38.931659 | orchestrator | Friday 17 April 2026 00:45:46 +0000 (0:00:00.290) 0:00:18.568 ********** 2026-04-17 00:55:38.931669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 00:45:41.796408', 'end': '2026-04-17 00:45:41.892584', 'delta': '0:00:00.096176', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 00:45:42.831107', 'end': '2026-04-17 00:45:42.935602', 'delta': '0:00:00.104495', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 00:45:43.443511', 'end': '2026-04-17 00:45:43.528622', 'delta': '0:00:00.085111', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.931682 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931685 | orchestrator | 2026-04-17 00:55:38.931688 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 00:55:38.931692 | orchestrator | Friday 17 April 2026 00:45:46 +0000 (0:00:00.320) 0:00:18.888 ********** 2026-04-17 00:55:38.931695 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.931698 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.931701 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.931704 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.931707 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.931710 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.931713 | orchestrator | 2026-04-17 00:55:38.931716 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] 
************* 2026-04-17 00:55:38.931719 | orchestrator | Friday 17 April 2026 00:45:48 +0000 (0:00:01.943) 0:00:20.831 ********** 2026-04-17 00:55:38.931723 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.931726 | orchestrator | 2026-04-17 00:55:38.931729 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 00:55:38.931732 | orchestrator | Friday 17 April 2026 00:45:49 +0000 (0:00:00.777) 0:00:21.609 ********** 2026-04-17 00:55:38.931735 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931738 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931741 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931746 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931749 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931752 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931755 | orchestrator | 2026-04-17 00:55:38.931758 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 00:55:38.931761 | orchestrator | Friday 17 April 2026 00:45:50 +0000 (0:00:01.068) 0:00:22.678 ********** 2026-04-17 00:55:38.931764 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931767 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931770 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931773 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931776 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931780 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931783 | orchestrator | 2026-04-17 00:55:38.931786 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 00:55:38.931789 | orchestrator | Friday 17 April 2026 00:45:51 +0000 (0:00:01.074) 0:00:23.752 ********** 2026-04-17 00:55:38.931792 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 00:55:38.931795 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931798 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931801 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931804 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931807 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931813 | orchestrator | 2026-04-17 00:55:38.931816 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 00:55:38.931819 | orchestrator | Friday 17 April 2026 00:45:52 +0000 (0:00:00.817) 0:00:24.570 ********** 2026-04-17 00:55:38.931822 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931825 | orchestrator | 2026-04-17 00:55:38.931829 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 00:55:38.931832 | orchestrator | Friday 17 April 2026 00:45:52 +0000 (0:00:00.110) 0:00:24.680 ********** 2026-04-17 00:55:38.931835 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931838 | orchestrator | 2026-04-17 00:55:38.931841 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 00:55:38.931844 | orchestrator | Friday 17 April 2026 00:45:52 +0000 (0:00:00.221) 0:00:24.901 ********** 2026-04-17 00:55:38.931847 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931850 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931853 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931859 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931862 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931865 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931868 | orchestrator | 2026-04-17 00:55:38.931871 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 00:55:38.931874 | 
orchestrator | Friday 17 April 2026 00:45:53 +0000 (0:00:00.740) 0:00:25.642 ********** 2026-04-17 00:55:38.931878 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931881 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931884 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931887 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931890 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931893 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931896 | orchestrator | 2026-04-17 00:55:38.931899 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 00:55:38.931902 | orchestrator | Friday 17 April 2026 00:45:54 +0000 (0:00:00.964) 0:00:26.606 ********** 2026-04-17 00:55:38.931905 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931908 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931911 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931914 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931917 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931920 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931923 | orchestrator | 2026-04-17 00:55:38.931926 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-17 00:55:38.931929 | orchestrator | Friday 17 April 2026 00:45:55 +0000 (0:00:00.762) 0:00:27.369 ********** 2026-04-17 00:55:38.931933 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931936 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931939 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931942 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931945 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.931948 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.931951 | orchestrator | 2026-04-17 
00:55:38.931954 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 00:55:38.931957 | orchestrator | Friday 17 April 2026 00:45:55 +0000 (0:00:00.627) 0:00:27.996 ********** 2026-04-17 00:55:38.931960 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.931963 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.931990 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.931994 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.931997 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.932000 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.932003 | orchestrator | 2026-04-17 00:55:38.932007 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 00:55:38.932010 | orchestrator | Friday 17 April 2026 00:45:56 +0000 (0:00:00.791) 0:00:28.788 ********** 2026-04-17 00:55:38.932016 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.932019 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.932174 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.932184 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.932187 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.932190 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.932193 | orchestrator | 2026-04-17 00:55:38.932197 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 00:55:38.932200 | orchestrator | Friday 17 April 2026 00:45:57 +0000 (0:00:00.722) 0:00:29.510 ********** 2026-04-17 00:55:38.932203 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.932207 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.932210 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.932213 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.932216 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 00:55:38.932219 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.932222 | orchestrator | 2026-04-17 00:55:38.932226 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-17 00:55:38.932231 | orchestrator | Friday 17 April 2026 00:45:57 +0000 (0:00:00.621) 0:00:30.131 ********** 2026-04-17 00:55:38.932235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e', 'dm-uuid-LVM-g9X6l0qwDIZWCpRWNEx1zkSl2Za2dKIeStxmKKanMhqLvtUuPaP0LfahY1QRB2m1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db', 'dm-uuid-LVM-1X6Uih7qPguWyuCZMmyrP2EbSIAqMcGMBcGbolZV6Jf1sLG9qibnfQkSm63AWGNe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7', 'dm-uuid-LVM-0Utkg46oKijwoDG46BuLcXixJM5j7w0mUYRqO3NpqwwYZ54MvhNbLP1SVpKLJwT4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb', 
'dm-uuid-LVM-GAlEacEbi1CwcOISAVKhXFrtTEYp6ye09GTUmP11e6XNES9wmceSUAE8bF5Ro1JF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fisNcD-or7H-PZQo-LaTD-fY2d-g4Xo-sXbrTK', 'scsi-0QEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20', 'scsi-SQEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198', 'dm-uuid-LVM-nCYaAaVdToTEuRn1zAv5nxcwpgFNSw2rlf27CKxFeSt93SMWQNXTDv6H9rfbdmbg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Kt5TXT-Lx50-maGB-s3l4-DCg1-ASrb-1tDoJY', 'scsi-0QEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128', 'scsi-SQEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe', 'scsi-SQEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64', 'dm-uuid-LVM-cY8ffF6iH2rYYxBC9cIWQg2oZ3Bddr1zv6dOY66N7vdlGXs1o6D2tVpbz91AILgU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932445 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.932448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932665 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nfWxIo-AfkD-xidi-gWBA-bejh-6Mm2-r5fz5c', 'scsi-0QEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363', 'scsi-SQEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQNsKy-QiL1-lcEN-TCk1-59lB-PdnC-LfU0c2', 'scsi-0QEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06', 'scsi-SQEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b', 'scsi-SQEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-17 00:55:38.932726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932842 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932855 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.932861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-colrFE-4Hk0-qtHQ-x927-2e4m-YVAL-XO7LZ6', 'scsi-0QEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7', 'scsi-SQEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QFHCby-2alW-5GKP-zhKf-5k5e-WAHr-CBnv39', 'scsi-0QEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c', 'scsi-SQEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932887 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.932890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 
00:55:38.932901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part1', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part14', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part15', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part16', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932940 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.932945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9', 'scsi-SQEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.932967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-17 00:55:38.932975 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.932980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.932997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.933003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.933009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.933013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:55:38.933020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part1', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part14', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part15', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part16', 
'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.933034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:55:38.933038 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.933044 | orchestrator | 2026-04-17 00:55:38.933050 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 00:55:38.933056 | orchestrator | Friday 17 April 2026 00:45:59 +0000 (0:00:01.425) 0:00:31.557 ********** 2026-04-17 00:55:38.933062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e', 'dm-uuid-LVM-g9X6l0qwDIZWCpRWNEx1zkSl2Za2dKIeStxmKKanMhqLvtUuPaP0LfahY1QRB2m1'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db', 'dm-uuid-LVM-1X6Uih7qPguWyuCZMmyrP2EbSIAqMcGMBcGbolZV6Jf1sLG9qibnfQkSm63AWGNe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933102 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7', 'dm-uuid-LVM-0Utkg46oKijwoDG46BuLcXixJM5j7w0mUYRqO3NpqwwYZ54MvhNbLP1SVpKLJwT4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb', 'dm-uuid-LVM-GAlEacEbi1CwcOISAVKhXFrtTEYp6ye09GTUmP11e6XNES9wmceSUAE8bF5Ro1JF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-17 00:55:38.933134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-colrFE-4Hk0-qtHQ-x927-2e4m-YVAL-XO7LZ6', 'scsi-0QEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7', 'scsi-SQEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QFHCby-2alW-5GKP-zhKf-5k5e-WAHr-CBnv39', 'scsi-0QEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c', 'scsi-SQEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933189 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9', 'scsi-SQEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933206 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-17 00:55:38.933233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fisNcD-or7H-PZQo-LaTD-fY2d-g4Xo-sXbrTK', 'scsi-0QEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20', 'scsi-SQEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198', 'dm-uuid-LVM-nCYaAaVdToTEuRn1zAv5nxcwpgFNSw2rlf27CKxFeSt93SMWQNXTDv6H9rfbdmbg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Kt5TXT-Lx50-maGB-s3l4-DCg1-ASrb-1tDoJY', 'scsi-0QEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128', 'scsi-SQEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64', 'dm-uuid-LVM-cY8ffF6iH2rYYxBC9cIWQg2oZ3Bddr1zv6dOY66N7vdlGXs1o6D2tVpbz91AILgU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe', 'scsi-SQEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933268 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933290 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933322 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.933328 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933395 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933399 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933406 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15'], 'labels': ['UEFI'], 'masters': [], 
'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933420 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933426 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nfWxIo-AfkD-xidi-gWBA-bejh-6Mm2-r5fz5c', 'scsi-0QEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363', 'scsi-SQEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933430 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQNsKy-QiL1-lcEN-TCk1-59lB-PdnC-LfU0c2', 'scsi-0QEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06', 'scsi-SQEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933440 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b', 'scsi-SQEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933456 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part1', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part14', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part15', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part16', 'scsi-SQEMU_QEMU_HARDDISK_935f922d-d22c-4ae3-8d21-594fc8e3804c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-17 00:55:38.933467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933472 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933475 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.933479 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933482 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.933485 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933490 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.933493 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933496 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933503 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933535 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933544 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933552 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933557 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933560 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933721 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933728 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933731 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933738 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933746 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part1', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part14', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part15', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part16', 'scsi-SQEMU_QEMU_HARDDISK_016aad87-77cf-4f84-939d-7c9b8b9ffacf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933750 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a612a59-a293-42e1-94a9-7d2382f6f1f5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933758 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933761 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.933765 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:55:38.933768 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.933771 | orchestrator | 2026-04-17 00:55:38.933776 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 00:55:38.933779 | orchestrator | Friday 17 April 2026 00:46:01 +0000 (0:00:02.422) 0:00:33.979 ********** 2026-04-17 00:55:38.933783 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.933786 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.933789 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.933792 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.933795 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.933798 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.933804 | orchestrator | 2026-04-17 00:55:38.933807 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-17 00:55:38.933810 | orchestrator | Friday 17 April 2026 00:46:04 +0000 (0:00:02.483) 0:00:36.463 ********** 2026-04-17 00:55:38.933813 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.933816 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.933819 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.933822 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.933825 | orchestrator | ok: 
[testbed-node-1] 2026-04-17 00:55:38.933828 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.933831 | orchestrator | 2026-04-17 00:55:38.933834 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 00:55:38.933838 | orchestrator | Friday 17 April 2026 00:46:05 +0000 (0:00:00.944) 0:00:37.407 ********** 2026-04-17 00:55:38.933841 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.933844 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.933847 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.933850 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.933853 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.933856 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.933859 | orchestrator | 2026-04-17 00:55:38.933862 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 00:55:38.933866 | orchestrator | Friday 17 April 2026 00:46:06 +0000 (0:00:01.450) 0:00:38.858 ********** 2026-04-17 00:55:38.933869 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.933872 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.933875 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.933878 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.933881 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.933884 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.933887 | orchestrator | 2026-04-17 00:55:38.933890 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 00:55:38.933893 | orchestrator | Friday 17 April 2026 00:46:07 +0000 (0:00:00.843) 0:00:39.701 ********** 2026-04-17 00:55:38.933896 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.933899 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.933902 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 00:55:38.933905 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.933908 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.933911 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.933915 | orchestrator | 2026-04-17 00:55:38.933918 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 00:55:38.933921 | orchestrator | Friday 17 April 2026 00:46:08 +0000 (0:00:00.868) 0:00:40.569 ********** 2026-04-17 00:55:38.933924 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.933927 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.933930 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.933933 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.933936 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.933939 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.933942 | orchestrator | 2026-04-17 00:55:38.933945 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 00:55:38.933948 | orchestrator | Friday 17 April 2026 00:46:09 +0000 (0:00:00.787) 0:00:41.357 ********** 2026-04-17 00:55:38.933952 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-17 00:55:38.933955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-17 00:55:38.933959 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-17 00:55:38.933962 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-17 00:55:38.933972 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-17 00:55:38.933980 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 00:55:38.933983 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-17 00:55:38.933988 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-17 00:55:38.933991 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-04-17 00:55:38.933994 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-17 00:55:38.933997 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-17 00:55:38.934000 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-17 00:55:38.934003 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-17 00:55:38.934006 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-17 00:55:38.934009 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 00:55:38.934041 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-17 00:55:38.934044 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-17 00:55:38.934048 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-17 00:55:38.934051 | orchestrator | 2026-04-17 00:55:38.934054 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 00:55:38.934057 | orchestrator | Friday 17 April 2026 00:46:12 +0000 (0:00:03.364) 0:00:44.722 ********** 2026-04-17 00:55:38.934060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 00:55:38.934063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 00:55:38.934066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-17 00:55:38.934069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 00:55:38.934072 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-17 00:55:38.934075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-17 00:55:38.934078 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-17 00:55:38.934082 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934087 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-17 
00:55:38.934090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-17 00:55:38.934093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 00:55:38.934096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 00:55:38.934099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 00:55:38.934102 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934105 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-17 00:55:38.934109 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-17 00:55:38.934112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-17 00:55:38.934115 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934118 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.934138 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.934142 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-17 00:55:38.934145 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-17 00:55:38.934148 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-17 00:55:38.934151 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.934155 | orchestrator | 2026-04-17 00:55:38.934158 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 00:55:38.934161 | orchestrator | Friday 17 April 2026 00:46:13 +0000 (0:00:00.709) 0:00:45.431 ********** 2026-04-17 00:55:38.934164 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.934167 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.934170 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.934173 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 
00:55:38.934177 | orchestrator | 2026-04-17 00:55:38.934180 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 00:55:38.934186 | orchestrator | Friday 17 April 2026 00:46:14 +0000 (0:00:01.482) 0:00:46.913 ********** 2026-04-17 00:55:38.934189 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934192 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934195 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934198 | orchestrator | 2026-04-17 00:55:38.934201 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-17 00:55:38.934204 | orchestrator | Friday 17 April 2026 00:46:15 +0000 (0:00:00.438) 0:00:47.351 ********** 2026-04-17 00:55:38.934208 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934211 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934214 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934217 | orchestrator | 2026-04-17 00:55:38.934220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 00:55:38.934223 | orchestrator | Friday 17 April 2026 00:46:15 +0000 (0:00:00.752) 0:00:48.103 ********** 2026-04-17 00:55:38.934226 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934229 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934232 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934235 | orchestrator | 2026-04-17 00:55:38.934238 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 00:55:38.934241 | orchestrator | Friday 17 April 2026 00:46:16 +0000 (0:00:00.750) 0:00:48.854 ********** 2026-04-17 00:55:38.934244 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.934247 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.934251 | orchestrator 
| ok: [testbed-node-5] 2026-04-17 00:55:38.934254 | orchestrator | 2026-04-17 00:55:38.934257 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 00:55:38.934262 | orchestrator | Friday 17 April 2026 00:46:17 +0000 (0:00:00.777) 0:00:49.632 ********** 2026-04-17 00:55:38.934265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.934268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.934271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.934274 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934277 | orchestrator | 2026-04-17 00:55:38.934280 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 00:55:38.934283 | orchestrator | Friday 17 April 2026 00:46:17 +0000 (0:00:00.552) 0:00:50.185 ********** 2026-04-17 00:55:38.934287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.934421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.934424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.934427 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934457 | orchestrator | 2026-04-17 00:55:38.934460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 00:55:38.934463 | orchestrator | Friday 17 April 2026 00:46:18 +0000 (0:00:00.736) 0:00:50.921 ********** 2026-04-17 00:55:38.934467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.934470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.934473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.934476 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934479 | 
orchestrator | 2026-04-17 00:55:38.934482 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 00:55:38.934485 | orchestrator | Friday 17 April 2026 00:46:19 +0000 (0:00:00.643) 0:00:51.565 ********** 2026-04-17 00:55:38.934488 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.934491 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.934494 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.934497 | orchestrator | 2026-04-17 00:55:38.934500 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 00:55:38.934503 | orchestrator | Friday 17 April 2026 00:46:19 +0000 (0:00:00.556) 0:00:52.121 ********** 2026-04-17 00:55:38.934509 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 00:55:38.934513 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-17 00:55:38.934518 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-17 00:55:38.934522 | orchestrator | 2026-04-17 00:55:38.934525 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 00:55:38.934528 | orchestrator | Friday 17 April 2026 00:46:21 +0000 (0:00:01.440) 0:00:53.562 ********** 2026-04-17 00:55:38.934531 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:55:38.934534 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:55:38.934537 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:55:38.934541 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-17 00:55:38.934544 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 00:55:38.934547 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 00:55:38.934550 | 
orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 00:55:38.934553 | orchestrator | 2026-04-17 00:55:38.934556 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 00:55:38.934559 | orchestrator | Friday 17 April 2026 00:46:22 +0000 (0:00:01.013) 0:00:54.575 ********** 2026-04-17 00:55:38.934562 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:55:38.934565 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:55:38.934568 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:55:38.934571 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-17 00:55:38.934575 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 00:55:38.934578 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 00:55:38.934581 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 00:55:38.934584 | orchestrator | 2026-04-17 00:55:38.934587 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 00:55:38.934590 | orchestrator | Friday 17 April 2026 00:46:24 +0000 (0:00:02.448) 0:00:57.024 ********** 2026-04-17 00:55:38.934594 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.934627 | orchestrator | 2026-04-17 00:55:38.934632 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 00:55:38.934637 | orchestrator | Friday 17 April 2026 00:46:26 +0000 (0:00:01.929) 0:00:58.953 ********** 
2026-04-17 00:55:38.934645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.934652 | orchestrator | 2026-04-17 00:55:38.934658 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 00:55:38.934662 | orchestrator | Friday 17 April 2026 00:46:28 +0000 (0:00:01.423) 0:01:00.377 ********** 2026-04-17 00:55:38.934668 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934674 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934678 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934686 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.934692 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.934697 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.934703 | orchestrator | 2026-04-17 00:55:38.934708 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 00:55:38.934714 | orchestrator | Friday 17 April 2026 00:46:29 +0000 (0:00:01.835) 0:01:02.212 ********** 2026-04-17 00:55:38.934725 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.934730 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.934736 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.934742 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.934747 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.934750 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.934754 | orchestrator | 2026-04-17 00:55:38.934757 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 00:55:38.934760 | orchestrator | Friday 17 April 2026 00:46:30 +0000 (0:00:01.059) 0:01:03.272 ********** 2026-04-17 00:55:38.934763 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.934766 | orchestrator | 
ok: [testbed-node-4] 2026-04-17 00:55:38.934769 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.934772 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.934775 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.934778 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.934781 | orchestrator | 2026-04-17 00:55:38.934784 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 00:55:38.934787 | orchestrator | Friday 17 April 2026 00:46:32 +0000 (0:00:01.513) 0:01:04.785 ********** 2026-04-17 00:55:38.934790 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.934793 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.934796 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.934799 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.934802 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.934805 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.934808 | orchestrator | 2026-04-17 00:55:38.934811 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 00:55:38.934814 | orchestrator | Friday 17 April 2026 00:46:34 +0000 (0:00:01.703) 0:01:06.489 ********** 2026-04-17 00:55:38.934817 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934820 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934823 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934826 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.934829 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.934836 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.934839 | orchestrator | 2026-04-17 00:55:38.934842 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 00:55:38.934890 | orchestrator | Friday 17 April 2026 00:46:35 +0000 (0:00:01.536) 0:01:08.026 ********** 2026-04-17 00:55:38.934894 
| orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.934898 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.934901 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.934904 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.934907 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.934910 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.934929 | orchestrator | 2026-04-17 00:55:38.934932 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 00:55:38.934935 | orchestrator | Friday 17 April 2026 00:46:36 +0000 (0:00:00.556) 0:01:08.582 ********** 2026-04-17 00:55:38.935340 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.935349 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.935352 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.935356 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935359 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935363 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935366 | orchestrator | 2026-04-17 00:55:38.935370 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 00:55:38.935374 | orchestrator | Friday 17 April 2026 00:46:36 +0000 (0:00:00.595) 0:01:09.178 ********** 2026-04-17 00:55:38.935377 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935381 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935385 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.935388 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.935396 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.935399 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.935403 | orchestrator | 2026-04-17 00:55:38.935406 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 00:55:38.935410 | orchestrator | 
Friday 17 April 2026 00:46:38 +0000 (0:00:01.231) 0:01:10.410 ********** 2026-04-17 00:55:38.935413 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935417 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935420 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.935423 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.935426 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.935429 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.935432 | orchestrator | 2026-04-17 00:55:38.935435 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 00:55:38.935438 | orchestrator | Friday 17 April 2026 00:46:39 +0000 (0:00:01.128) 0:01:11.539 ********** 2026-04-17 00:55:38.935441 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.935476 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.935479 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.935482 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935485 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935488 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935491 | orchestrator | 2026-04-17 00:55:38.935494 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 00:55:38.935497 | orchestrator | Friday 17 April 2026 00:46:40 +0000 (0:00:00.833) 0:01:12.372 ********** 2026-04-17 00:55:38.935500 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.935503 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.935506 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.935509 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.935512 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.935515 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.935518 | orchestrator | 2026-04-17 00:55:38.935522 | orchestrator | TASK [ceph-handler : Set_fact 
handler_osd_status] ****************************** 2026-04-17 00:55:38.935525 | orchestrator | Friday 17 April 2026 00:46:40 +0000 (0:00:00.532) 0:01:12.904 ********** 2026-04-17 00:55:38.935528 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935531 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935552 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.935556 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935559 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935562 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935565 | orchestrator | 2026-04-17 00:55:38.935568 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 00:55:38.935572 | orchestrator | Friday 17 April 2026 00:46:41 +0000 (0:00:00.681) 0:01:13.586 ********** 2026-04-17 00:55:38.935575 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935580 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935585 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.935590 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935595 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935627 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935632 | orchestrator | 2026-04-17 00:55:38.935637 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 00:55:38.935641 | orchestrator | Friday 17 April 2026 00:46:41 +0000 (0:00:00.503) 0:01:14.090 ********** 2026-04-17 00:55:38.935646 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935651 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935656 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.935661 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935665 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935670 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935675 | 
orchestrator | 2026-04-17 00:55:38.935680 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 00:55:38.935693 | orchestrator | Friday 17 April 2026 00:46:42 +0000 (0:00:00.616) 0:01:14.706 ********** 2026-04-17 00:55:38.935698 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.935703 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.935707 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.935711 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935714 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935717 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935816 | orchestrator | 2026-04-17 00:55:38.935825 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 00:55:38.935831 | orchestrator | Friday 17 April 2026 00:46:42 +0000 (0:00:00.495) 0:01:15.201 ********** 2026-04-17 00:55:38.935836 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.935842 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.935848 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.935853 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.935875 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.935880 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.935885 | orchestrator | 2026-04-17 00:55:38.935890 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 00:55:38.935898 | orchestrator | Friday 17 April 2026 00:46:43 +0000 (0:00:00.644) 0:01:15.846 ********** 2026-04-17 00:55:38.935904 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.935909 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.935913 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.935918 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.935922 | 
orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.935927 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.935931 | orchestrator | 2026-04-17 00:55:38.935936 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 00:55:38.935940 | orchestrator | Friday 17 April 2026 00:46:44 +0000 (0:00:00.521) 0:01:16.367 ********** 2026-04-17 00:55:38.935944 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935949 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935954 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.935959 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.935963 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.935968 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.935973 | orchestrator | 2026-04-17 00:55:38.935978 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 00:55:38.935984 | orchestrator | Friday 17 April 2026 00:46:44 +0000 (0:00:00.699) 0:01:17.067 ********** 2026-04-17 00:55:38.935988 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.935993 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.935999 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.936002 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.936005 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.936008 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.936011 | orchestrator | 2026-04-17 00:55:38.936014 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-17 00:55:38.936017 | orchestrator | Friday 17 April 2026 00:46:45 +0000 (0:00:01.025) 0:01:18.092 ********** 2026-04-17 00:55:38.936021 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.936024 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.936027 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.936030 | orchestrator | 
changed: [testbed-node-0] 2026-04-17 00:55:38.936033 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.936036 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.936039 | orchestrator | 2026-04-17 00:55:38.936042 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-17 00:55:38.936045 | orchestrator | Friday 17 April 2026 00:46:47 +0000 (0:00:01.589) 0:01:19.682 ********** 2026-04-17 00:55:38.936048 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.936051 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.936054 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.936062 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.936065 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.936068 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.936072 | orchestrator | 2026-04-17 00:55:38.936075 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-17 00:55:38.936078 | orchestrator | Friday 17 April 2026 00:46:49 +0000 (0:00:02.360) 0:01:22.043 ********** 2026-04-17 00:55:38.936081 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.936084 | orchestrator | 2026-04-17 00:55:38.936088 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-17 00:55:38.936091 | orchestrator | Friday 17 April 2026 00:46:50 +0000 (0:00:01.203) 0:01:23.246 ********** 2026-04-17 00:55:38.936094 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936097 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936103 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936106 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936109 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 00:55:38.936112 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936115 | orchestrator | 2026-04-17 00:55:38.936118 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-17 00:55:38.936121 | orchestrator | Friday 17 April 2026 00:46:51 +0000 (0:00:00.511) 0:01:23.758 ********** 2026-04-17 00:55:38.936125 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936128 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936131 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936134 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936137 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936140 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936143 | orchestrator | 2026-04-17 00:55:38.936146 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-17 00:55:38.936149 | orchestrator | Friday 17 April 2026 00:46:52 +0000 (0:00:00.745) 0:01:24.503 ********** 2026-04-17 00:55:38.936152 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 00:55:38.936155 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 00:55:38.936158 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 00:55:38.936161 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 00:55:38.936164 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 00:55:38.936167 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-17 00:55:38.936170 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 00:55:38.936173 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 00:55:38.936176 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 00:55:38.936180 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 00:55:38.936194 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 00:55:38.936198 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-17 00:55:38.936201 | orchestrator | 2026-04-17 00:55:38.936204 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-17 00:55:38.936207 | orchestrator | Friday 17 April 2026 00:46:53 +0000 (0:00:01.467) 0:01:25.971 ********** 2026-04-17 00:55:38.936210 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.936213 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.936216 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.936222 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.936225 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.936228 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.936231 | orchestrator | 2026-04-17 00:55:38.936234 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-17 00:55:38.936237 | orchestrator | Friday 17 April 2026 00:46:54 +0000 (0:00:01.011) 0:01:26.982 ********** 2026-04-17 00:55:38.936240 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936243 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936246 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936249 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936252 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936255 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
00:55:38.936258 | orchestrator | 2026-04-17 00:55:38.936261 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-17 00:55:38.936264 | orchestrator | Friday 17 April 2026 00:46:55 +0000 (0:00:00.517) 0:01:27.500 ********** 2026-04-17 00:55:38.936267 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936270 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936273 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936276 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936279 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936282 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936285 | orchestrator | 2026-04-17 00:55:38.936288 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-17 00:55:38.936291 | orchestrator | Friday 17 April 2026 00:46:55 +0000 (0:00:00.619) 0:01:28.120 ********** 2026-04-17 00:55:38.936294 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936297 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936300 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936303 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936306 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936309 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936312 | orchestrator | 2026-04-17 00:55:38.936315 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-17 00:55:38.936318 | orchestrator | Friday 17 April 2026 00:46:56 +0000 (0:00:00.490) 0:01:28.611 ********** 2026-04-17 00:55:38.936322 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.936325 | orchestrator | 2026-04-17 00:55:38.936328 | orchestrator | 
TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-17 00:55:38.936331 | orchestrator | Friday 17 April 2026 00:46:57 +0000 (0:00:01.005) 0:01:29.616 ********** 2026-04-17 00:55:38.936334 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.936337 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.936340 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.936343 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.936346 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.936349 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.936352 | orchestrator | 2026-04-17 00:55:38.936355 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-17 00:55:38.936360 | orchestrator | Friday 17 April 2026 00:48:00 +0000 (0:01:03.189) 0:02:32.806 ********** 2026-04-17 00:55:38.936363 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 00:55:38.936366 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 00:55:38.936369 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 00:55:38.936372 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936375 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 00:55:38.936378 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 00:55:38.936384 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 00:55:38.936387 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936390 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 00:55:38.936393 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 00:55:38.936396 | orchestrator | 
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 00:55:38.936399 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936402 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 00:55:38.936405 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 00:55:38.936408 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 00:55:38.936411 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936414 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 00:55:38.936417 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 00:55:38.936420 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 00:55:38.936423 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936434 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-17 00:55:38.936437 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-17 00:55:38.936440 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-17 00:55:38.936443 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936446 | orchestrator | 2026-04-17 00:55:38.936449 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-17 00:55:38.936452 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.734) 0:02:33.541 ********** 2026-04-17 00:55:38.936455 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936459 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936462 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936466 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936469 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936473 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936476 | orchestrator | 2026-04-17 00:55:38.936480 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-17 00:55:38.936483 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.650) 0:02:34.191 ********** 2026-04-17 00:55:38.936487 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936490 | orchestrator | 2026-04-17 00:55:38.936493 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-17 00:55:38.936497 | orchestrator | Friday 17 April 2026 00:48:01 +0000 (0:00:00.120) 0:02:34.312 ********** 2026-04-17 00:55:38.936540 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936545 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936548 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936552 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936555 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936559 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936562 | orchestrator | 2026-04-17 00:55:38.936566 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-17 00:55:38.936569 | orchestrator | Friday 17 April 2026 00:48:02 +0000 (0:00:00.603) 0:02:34.916 ********** 2026-04-17 00:55:38.936573 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936576 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936580 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936583 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936586 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936590 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936594 | orchestrator | 2026-04-17 00:55:38.936612 | orchestrator | TASK 
[ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-17 00:55:38.936618 | orchestrator | Friday 17 April 2026 00:48:03 +0000 (0:00:00.816) 0:02:35.732 ********** 2026-04-17 00:55:38.936624 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936627 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936630 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936633 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936636 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936639 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936642 | orchestrator | 2026-04-17 00:55:38.936645 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-17 00:55:38.936648 | orchestrator | Friday 17 April 2026 00:48:04 +0000 (0:00:00.755) 0:02:36.487 ********** 2026-04-17 00:55:38.936651 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.936654 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.936657 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.936660 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.936663 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.936666 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.936669 | orchestrator | 2026-04-17 00:55:38.936672 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-17 00:55:38.936675 | orchestrator | Friday 17 April 2026 00:48:08 +0000 (0:00:03.871) 0:02:40.360 ********** 2026-04-17 00:55:38.936681 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.936684 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.936687 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.936690 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.936693 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.936696 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.936699 
| orchestrator | 2026-04-17 00:55:38.936702 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-17 00:55:38.936705 | orchestrator | Friday 17 April 2026 00:48:08 +0000 (0:00:00.688) 0:02:41.049 ********** 2026-04-17 00:55:38.936708 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.936712 | orchestrator | 2026-04-17 00:55:38.936715 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-17 00:55:38.936718 | orchestrator | Friday 17 April 2026 00:48:09 +0000 (0:00:01.266) 0:02:42.315 ********** 2026-04-17 00:55:38.936721 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936724 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936727 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936730 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936733 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936736 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936739 | orchestrator | 2026-04-17 00:55:38.936742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-17 00:55:38.936745 | orchestrator | Friday 17 April 2026 00:48:10 +0000 (0:00:00.736) 0:02:43.052 ********** 2026-04-17 00:55:38.936748 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936751 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936754 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936757 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936760 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936763 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936766 | orchestrator | 2026-04-17 00:55:38.936769 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-17 00:55:38.936772 | orchestrator | Friday 17 April 2026 00:48:11 +0000 (0:00:00.646) 0:02:43.699 ********** 2026-04-17 00:55:38.936775 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936778 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936793 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936796 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936802 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936805 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936808 | orchestrator | 2026-04-17 00:55:38.936811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-17 00:55:38.936814 | orchestrator | Friday 17 April 2026 00:48:12 +0000 (0:00:00.804) 0:02:44.504 ********** 2026-04-17 00:55:38.936817 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936820 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936823 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936826 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936829 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936832 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936835 | orchestrator | 2026-04-17 00:55:38.936838 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-17 00:55:38.936842 | orchestrator | Friday 17 April 2026 00:48:12 +0000 (0:00:00.733) 0:02:45.238 ********** 2026-04-17 00:55:38.936845 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936848 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936851 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936854 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936857 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936860 
| orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936863 | orchestrator | 2026-04-17 00:55:38.936866 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-17 00:55:38.936869 | orchestrator | Friday 17 April 2026 00:48:13 +0000 (0:00:00.657) 0:02:45.895 ********** 2026-04-17 00:55:38.936872 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936875 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936878 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936881 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936885 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936888 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936891 | orchestrator | 2026-04-17 00:55:38.936894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-17 00:55:38.936897 | orchestrator | Friday 17 April 2026 00:48:14 +0000 (0:00:00.795) 0:02:46.691 ********** 2026-04-17 00:55:38.936900 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936903 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936906 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936909 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.936912 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.936915 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.936918 | orchestrator | 2026-04-17 00:55:38.936921 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-17 00:55:38.936924 | orchestrator | Friday 17 April 2026 00:48:14 +0000 (0:00:00.580) 0:02:47.272 ********** 2026-04-17 00:55:38.936927 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.936930 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.936933 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.936936 | 
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Friday 17 April 2026 00:48:15 +0000 (0:00:00.868) 0:02:48.140 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Friday 17 April 2026 00:48:17 +0000 (0:00:01.418) 0:02:49.559 **********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Friday 17 April 2026 00:48:18 +0000 (0:00:01.686) 0:02:51.246 **********
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Friday 17 April 2026 00:48:25 +0000 (0:00:06.989) 0:02:58.235 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Friday 17 April 2026 00:48:27 +0000 (0:00:01.164) 0:02:59.400 **********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Friday 17 April 2026 00:48:28 +0000 (0:00:00.953) 0:03:00.353 **********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Friday 17 April 2026 00:48:29 +0000 (0:00:01.844) 0:03:02.198 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Friday 17 April 2026 00:48:30 +0000 (0:00:00.749) 0:03:02.947 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Friday 17 April 2026 00:48:31 +0000 (0:00:01.017) 0:03:03.964 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Friday 17 April 2026 00:48:32 +0000 (0:00:00.688) 0:03:04.653 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Friday 17 April 2026 00:48:33 +0000 (0:00:01.077) 0:03:05.730 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Friday 17 April 2026 00:48:34 +0000 (0:00:00.738) 0:03:06.469 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Friday 17 April 2026 00:48:34 +0000 (0:00:00.664) 0:03:07.134 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Friday 17 April 2026 00:48:36 +0000 (0:00:01.612) 0:03:08.747 **********
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Friday 17 April 2026 00:48:37 +0000 (0:00:00.742) 0:03:09.489 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Friday 17 April 2026 00:48:39 +0000 (0:00:02.119) 0:03:11.609 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Friday 17 April 2026 00:48:39 +0000 (0:00:00.512) 0:03:12.122 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Friday 17 April 2026 00:48:40 +0000 (0:00:00.698) 0:03:12.820 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Friday 17 April 2026 00:48:40 +0000 (0:00:00.502) 0:03:13.323 **********
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Friday 17 April 2026 00:48:41 +0000 (0:00:00.762) 0:03:14.085 **********
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Friday 17 April 2026 00:48:42 +0000 (0:00:00.460) 0:03:14.546 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Friday 17 April 2026 00:48:42 +0000 (0:00:00.589) 0:03:15.135 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Friday 17 April 2026 00:48:43 +0000 (0:00:00.443) 0:03:15.578 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Friday 17 April 2026 00:48:43 +0000 (0:00:00.625) 0:03:16.204 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Friday 17 April 2026 00:48:44 +0000 (0:00:00.542) 0:03:16.747 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Friday 17 April 2026 00:48:44 +0000 (0:00:00.586) 0:03:17.334 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
skipping: [testbed-node-0]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Friday 17 April 2026 00:48:45 +0000 (0:00:00.498) 0:03:17.832 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Friday 17 April 2026 00:48:46 +0000 (0:00:00.526) 0:03:18.359 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Friday 17 April 2026 00:48:46 +0000 (0:00:00.525) 0:03:18.884 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Friday 17 April 2026 00:48:47 +0000 (0:00:00.786) 0:03:19.670 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Friday 17 April 2026 00:48:47 +0000 (0:00:00.552) 0:03:20.223 **********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Friday 17 April 2026 00:48:49 +0000 (0:00:01.570) 0:03:21.793 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 17 April 2026 00:48:52 +0000 (0:00:03.226) 0:03:25.020 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Friday 17 April 2026 00:48:53 +0000 (0:00:00.922) 0:03:25.942 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Friday 17 April 2026 00:48:54 +0000 (0:00:01.009) 0:03:26.952 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Friday 17 April 2026 00:48:54 +0000 (0:00:00.313) 0:03:27.265 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Friday 17 April 2026 00:48:55 +0000 (0:00:00.960) 0:03:28.226 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Friday 17 April 2026 00:48:56 +0000 (0:00:00.704) 0:03:28.930 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Friday 17 April 2026 00:48:56 +0000 (0:00:00.267) 0:03:29.198 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Friday 17 April 2026 00:48:57 +0000 (0:00:00.872) 0:03:30.071 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Friday 17 April 2026 00:48:58 +0000 (0:00:00.379) 0:03:30.450 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Friday 17 April 2026 00:48:58 +0000 (0:00:00.389) 0:03:30.840 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Friday 17 April 2026 00:48:58 +0000 (0:00:00.194) 0:03:31.034 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Friday 17 April 2026 00:48:58 +0000 (0:00:00.263) 0:03:31.298 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Friday 17 April 2026 00:48:59 +0000 (0:00:00.233) 0:03:31.531 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Friday 17 April 2026 00:48:59 +0000 (0:00:00.191) 0:03:31.722 **********
skipping:
[testbed-node-3] 2026-04-17 00:55:38.938690 | orchestrator | 2026-04-17 00:55:38.938693 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-17 00:55:38.938696 | orchestrator | Friday 17 April 2026 00:48:59 +0000 (0:00:00.110) 0:03:31.833 ********** 2026-04-17 00:55:38.938699 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938702 | orchestrator | 2026-04-17 00:55:38.938706 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-17 00:55:38.938709 | orchestrator | Friday 17 April 2026 00:48:59 +0000 (0:00:00.214) 0:03:32.047 ********** 2026-04-17 00:55:38.938712 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938715 | orchestrator | 2026-04-17 00:55:38.938718 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-17 00:55:38.938721 | orchestrator | Friday 17 April 2026 00:48:59 +0000 (0:00:00.185) 0:03:32.233 ********** 2026-04-17 00:55:38.938724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.938727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.938730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.938733 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938736 | orchestrator | 2026-04-17 00:55:38.938739 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-17 00:55:38.938754 | orchestrator | Friday 17 April 2026 00:49:00 +0000 (0:00:00.550) 0:03:32.784 ********** 2026-04-17 00:55:38.938758 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938761 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.938764 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.938767 | orchestrator | 2026-04-17 00:55:38.938770 | orchestrator | RUNNING HANDLER [ceph-handler : 
Re-enable pg autoscale on pools] *************** 2026-04-17 00:55:38.938773 | orchestrator | Friday 17 April 2026 00:49:00 +0000 (0:00:00.460) 0:03:33.244 ********** 2026-04-17 00:55:38.938776 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938779 | orchestrator | 2026-04-17 00:55:38.938783 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-17 00:55:38.938786 | orchestrator | Friday 17 April 2026 00:49:01 +0000 (0:00:00.226) 0:03:33.470 ********** 2026-04-17 00:55:38.938790 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938796 | orchestrator | 2026-04-17 00:55:38.938804 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-17 00:55:38.938809 | orchestrator | Friday 17 April 2026 00:49:01 +0000 (0:00:00.203) 0:03:33.674 ********** 2026-04-17 00:55:38.938814 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.938819 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.938825 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.938830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.938837 | orchestrator | 2026-04-17 00:55:38.938840 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-17 00:55:38.938843 | orchestrator | Friday 17 April 2026 00:49:02 +0000 (0:00:00.830) 0:03:34.504 ********** 2026-04-17 00:55:38.938849 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.938854 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.938859 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.938864 | orchestrator | 2026-04-17 00:55:38.938870 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-17 00:55:38.938874 | orchestrator | Friday 17 April 2026 00:49:02 +0000 (0:00:00.308) 
0:03:34.812 ********** 2026-04-17 00:55:38.938877 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.938880 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.938883 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.938886 | orchestrator | 2026-04-17 00:55:38.938889 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-17 00:55:38.938892 | orchestrator | Friday 17 April 2026 00:49:03 +0000 (0:00:01.070) 0:03:35.883 ********** 2026-04-17 00:55:38.938895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.938899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.938902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.938905 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.938908 | orchestrator | 2026-04-17 00:55:38.938911 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-17 00:55:38.938914 | orchestrator | Friday 17 April 2026 00:49:04 +0000 (0:00:00.781) 0:03:36.664 ********** 2026-04-17 00:55:38.938917 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.938921 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.938924 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.938927 | orchestrator | 2026-04-17 00:55:38.938931 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-17 00:55:38.938935 | orchestrator | Friday 17 April 2026 00:49:04 +0000 (0:00:00.331) 0:03:36.995 ********** 2026-04-17 00:55:38.938938 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.938942 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.938945 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.938951 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-17 00:55:38.938955 | orchestrator | 2026-04-17 00:55:38.938958 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-17 00:55:38.938962 | orchestrator | Friday 17 April 2026 00:49:05 +0000 (0:00:01.097) 0:03:38.093 ********** 2026-04-17 00:55:38.938965 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.938969 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.938972 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.938976 | orchestrator | 2026-04-17 00:55:38.938979 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-17 00:55:38.938983 | orchestrator | Friday 17 April 2026 00:49:06 +0000 (0:00:00.450) 0:03:38.544 ********** 2026-04-17 00:55:38.938987 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.938990 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.938994 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.938997 | orchestrator | 2026-04-17 00:55:38.939000 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-17 00:55:38.939004 | orchestrator | Friday 17 April 2026 00:49:07 +0000 (0:00:01.673) 0:03:40.217 ********** 2026-04-17 00:55:38.939007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.939011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.939014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.939018 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.939021 | orchestrator | 2026-04-17 00:55:38.939027 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-17 00:55:38.939030 | orchestrator | Friday 17 April 2026 00:49:08 +0000 (0:00:00.616) 0:03:40.834 ********** 2026-04-17 00:55:38.939034 | orchestrator | ok: 
[testbed-node-3] 2026-04-17 00:55:38.939037 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.939041 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.939044 | orchestrator | 2026-04-17 00:55:38.939048 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-17 00:55:38.939052 | orchestrator | Friday 17 April 2026 00:49:08 +0000 (0:00:00.317) 0:03:41.151 ********** 2026-04-17 00:55:38.939057 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.939064 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.939070 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.939075 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939080 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939103 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939109 | orchestrator | 2026-04-17 00:55:38.939113 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-17 00:55:38.939118 | orchestrator | Friday 17 April 2026 00:49:09 +0000 (0:00:00.594) 0:03:41.746 ********** 2026-04-17 00:55:38.939123 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.939129 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.939133 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.939138 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.939143 | orchestrator | 2026-04-17 00:55:38.939148 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-17 00:55:38.939152 | orchestrator | Friday 17 April 2026 00:49:10 +0000 (0:00:01.054) 0:03:42.801 ********** 2026-04-17 00:55:38.939157 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939162 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939167 | orchestrator | ok: [testbed-node-2] 
2026-04-17 00:55:38.939172 | orchestrator | 2026-04-17 00:55:38.939177 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-17 00:55:38.939182 | orchestrator | Friday 17 April 2026 00:49:10 +0000 (0:00:00.305) 0:03:43.106 ********** 2026-04-17 00:55:38.939188 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.939193 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.939198 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.939203 | orchestrator | 2026-04-17 00:55:38.939208 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-17 00:55:38.939213 | orchestrator | Friday 17 April 2026 00:49:12 +0000 (0:00:01.304) 0:03:44.411 ********** 2026-04-17 00:55:38.939217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 00:55:38.939221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 00:55:38.939225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 00:55:38.939228 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939232 | orchestrator | 2026-04-17 00:55:38.939235 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-17 00:55:38.939239 | orchestrator | Friday 17 April 2026 00:49:12 +0000 (0:00:00.630) 0:03:45.042 ********** 2026-04-17 00:55:38.939242 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939246 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939249 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939253 | orchestrator | 2026-04-17 00:55:38.939257 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-17 00:55:38.939260 | orchestrator | 2026-04-17 00:55:38.939264 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 00:55:38.939267 | 
orchestrator | Friday 17 April 2026 00:49:13 +0000 (0:00:00.610) 0:03:45.652 ********** 2026-04-17 00:55:38.939272 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.939282 | orchestrator | 2026-04-17 00:55:38.939288 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 00:55:38.939292 | orchestrator | Friday 17 April 2026 00:49:14 +0000 (0:00:00.711) 0:03:46.364 ********** 2026-04-17 00:55:38.939298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.939302 | orchestrator | 2026-04-17 00:55:38.939307 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 00:55:38.939312 | orchestrator | Friday 17 April 2026 00:49:14 +0000 (0:00:00.516) 0:03:46.880 ********** 2026-04-17 00:55:38.939317 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939322 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939332 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939338 | orchestrator | 2026-04-17 00:55:38.939343 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 00:55:38.939348 | orchestrator | Friday 17 April 2026 00:49:15 +0000 (0:00:00.689) 0:03:47.570 ********** 2026-04-17 00:55:38.939353 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939358 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939363 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939368 | orchestrator | 2026-04-17 00:55:38.939373 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 00:55:38.939379 | orchestrator | Friday 17 April 2026 00:49:15 +0000 (0:00:00.575) 0:03:48.146 ********** 2026-04-17 00:55:38.939384 
| orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939389 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939394 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939399 | orchestrator | 2026-04-17 00:55:38.939404 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 00:55:38.939409 | orchestrator | Friday 17 April 2026 00:49:16 +0000 (0:00:00.320) 0:03:48.466 ********** 2026-04-17 00:55:38.939413 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939418 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939424 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939429 | orchestrator | 2026-04-17 00:55:38.939434 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 00:55:38.939439 | orchestrator | Friday 17 April 2026 00:49:16 +0000 (0:00:00.344) 0:03:48.810 ********** 2026-04-17 00:55:38.939444 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939449 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939454 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939459 | orchestrator | 2026-04-17 00:55:38.939464 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 00:55:38.939469 | orchestrator | Friday 17 April 2026 00:49:17 +0000 (0:00:00.659) 0:03:49.470 ********** 2026-04-17 00:55:38.939474 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939480 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939485 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939490 | orchestrator | 2026-04-17 00:55:38.939496 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 00:55:38.939499 | orchestrator | Friday 17 April 2026 00:49:17 +0000 (0:00:00.350) 0:03:49.820 ********** 2026-04-17 00:55:38.939524 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 00:55:38.939530 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939535 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939540 | orchestrator | 2026-04-17 00:55:38.939545 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 00:55:38.939550 | orchestrator | Friday 17 April 2026 00:49:18 +0000 (0:00:00.574) 0:03:50.395 ********** 2026-04-17 00:55:38.939555 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939560 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939566 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939571 | orchestrator | 2026-04-17 00:55:38.939576 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 00:55:38.939587 | orchestrator | Friday 17 April 2026 00:49:18 +0000 (0:00:00.681) 0:03:51.076 ********** 2026-04-17 00:55:38.939592 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939625 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939629 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939632 | orchestrator | 2026-04-17 00:55:38.939635 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 00:55:38.939638 | orchestrator | Friday 17 April 2026 00:49:19 +0000 (0:00:00.808) 0:03:51.885 ********** 2026-04-17 00:55:38.939641 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939644 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939648 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939651 | orchestrator | 2026-04-17 00:55:38.939654 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 00:55:38.939657 | orchestrator | Friday 17 April 2026 00:49:19 +0000 (0:00:00.322) 0:03:52.208 ********** 2026-04-17 00:55:38.939660 | orchestrator | ok: [testbed-node-0] 2026-04-17 
00:55:38.939663 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939666 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939669 | orchestrator | 2026-04-17 00:55:38.939672 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 00:55:38.939675 | orchestrator | Friday 17 April 2026 00:49:20 +0000 (0:00:00.682) 0:03:52.891 ********** 2026-04-17 00:55:38.939678 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939681 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939684 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939688 | orchestrator | 2026-04-17 00:55:38.939691 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 00:55:38.939694 | orchestrator | Friday 17 April 2026 00:49:20 +0000 (0:00:00.344) 0:03:53.235 ********** 2026-04-17 00:55:38.939697 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939700 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939703 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939706 | orchestrator | 2026-04-17 00:55:38.939709 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 00:55:38.939713 | orchestrator | Friday 17 April 2026 00:49:21 +0000 (0:00:00.417) 0:03:53.653 ********** 2026-04-17 00:55:38.939718 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939723 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939728 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939731 | orchestrator | 2026-04-17 00:55:38.939734 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 00:55:38.939738 | orchestrator | Friday 17 April 2026 00:49:22 +0000 (0:00:00.793) 0:03:54.446 ********** 2026-04-17 00:55:38.939741 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939744 
| orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939747 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939750 | orchestrator | 2026-04-17 00:55:38.939753 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 00:55:38.939756 | orchestrator | Friday 17 April 2026 00:49:22 +0000 (0:00:00.778) 0:03:55.224 ********** 2026-04-17 00:55:38.939759 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939762 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.939765 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.939768 | orchestrator | 2026-04-17 00:55:38.939774 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 00:55:38.939777 | orchestrator | Friday 17 April 2026 00:49:23 +0000 (0:00:00.331) 0:03:55.556 ********** 2026-04-17 00:55:38.939780 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939783 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939786 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939789 | orchestrator | 2026-04-17 00:55:38.939792 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 00:55:38.939795 | orchestrator | Friday 17 April 2026 00:49:23 +0000 (0:00:00.327) 0:03:55.884 ********** 2026-04-17 00:55:38.939840 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939844 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939847 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939850 | orchestrator | 2026-04-17 00:55:38.939853 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 00:55:38.939856 | orchestrator | Friday 17 April 2026 00:49:23 +0000 (0:00:00.350) 0:03:56.234 ********** 2026-04-17 00:55:38.939859 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939862 | orchestrator | ok: [testbed-node-1] 
2026-04-17 00:55:38.939865 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939868 | orchestrator | 2026-04-17 00:55:38.939871 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-17 00:55:38.939874 | orchestrator | Friday 17 April 2026 00:49:24 +0000 (0:00:00.778) 0:03:57.013 ********** 2026-04-17 00:55:38.939877 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939880 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939883 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939886 | orchestrator | 2026-04-17 00:55:38.939889 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-17 00:55:38.939892 | orchestrator | Friday 17 April 2026 00:49:25 +0000 (0:00:00.449) 0:03:57.463 ********** 2026-04-17 00:55:38.939895 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.939898 | orchestrator | 2026-04-17 00:55:38.939901 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-17 00:55:38.939905 | orchestrator | Friday 17 April 2026 00:49:25 +0000 (0:00:00.549) 0:03:58.012 ********** 2026-04-17 00:55:38.939908 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.939911 | orchestrator | 2026-04-17 00:55:38.939924 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-17 00:55:38.939927 | orchestrator | Friday 17 April 2026 00:49:25 +0000 (0:00:00.282) 0:03:58.295 ********** 2026-04-17 00:55:38.939930 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-17 00:55:38.939934 | orchestrator | 2026-04-17 00:55:38.939937 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-17 00:55:38.939940 | orchestrator | Friday 17 April 2026 00:49:26 +0000 (0:00:01.003) 0:03:59.298 ********** 
2026-04-17 00:55:38.939943 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939946 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939949 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939952 | orchestrator | 2026-04-17 00:55:38.939955 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-17 00:55:38.939958 | orchestrator | Friday 17 April 2026 00:49:27 +0000 (0:00:00.296) 0:03:59.594 ********** 2026-04-17 00:55:38.939961 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.939964 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.939967 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.939970 | orchestrator | 2026-04-17 00:55:38.939973 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-17 00:55:38.939976 | orchestrator | Friday 17 April 2026 00:49:27 +0000 (0:00:00.281) 0:03:59.875 ********** 2026-04-17 00:55:38.939979 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.939982 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.939985 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.939988 | orchestrator | 2026-04-17 00:55:38.939992 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-17 00:55:38.939995 | orchestrator | Friday 17 April 2026 00:49:28 +0000 (0:00:01.110) 0:04:00.986 ********** 2026-04-17 00:55:38.939998 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940001 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940004 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940007 | orchestrator | 2026-04-17 00:55:38.940010 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-17 00:55:38.940013 | orchestrator | Friday 17 April 2026 00:49:29 +0000 (0:00:00.923) 0:04:01.910 ********** 2026-04-17 00:55:38.940018 | orchestrator | 
changed: [testbed-node-0] 2026-04-17 00:55:38.940021 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940024 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940054 | orchestrator | 2026-04-17 00:55:38.940058 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-17 00:55:38.940061 | orchestrator | Friday 17 April 2026 00:49:30 +0000 (0:00:00.670) 0:04:02.580 ********** 2026-04-17 00:55:38.940064 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940067 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940070 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940073 | orchestrator | 2026-04-17 00:55:38.940076 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-17 00:55:38.940079 | orchestrator | Friday 17 April 2026 00:49:30 +0000 (0:00:00.665) 0:04:03.245 ********** 2026-04-17 00:55:38.940083 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940086 | orchestrator | 2026-04-17 00:55:38.940089 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-17 00:55:38.940094 | orchestrator | Friday 17 April 2026 00:49:32 +0000 (0:00:01.223) 0:04:04.469 ********** 2026-04-17 00:55:38.940099 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940104 | orchestrator | 2026-04-17 00:55:38.940109 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-17 00:55:38.940113 | orchestrator | Friday 17 April 2026 00:49:32 +0000 (0:00:00.720) 0:04:05.189 ********** 2026-04-17 00:55:38.940118 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 00:55:38.940123 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.940131 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.940136 | orchestrator | 
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:55:38.940141 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-17 00:55:38.940146 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:55:38.940151 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:55:38.940155 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-17 00:55:38.940160 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:55:38.940165 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-17 00:55:38.940170 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-17 00:55:38.940174 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-17 00:55:38.940179 | orchestrator | 2026-04-17 00:55:38.940183 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-17 00:55:38.940188 | orchestrator | Friday 17 April 2026 00:49:36 +0000 (0:00:03.676) 0:04:08.865 ********** 2026-04-17 00:55:38.940193 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940198 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940203 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940207 | orchestrator | 2026-04-17 00:55:38.940212 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-17 00:55:38.940217 | orchestrator | Friday 17 April 2026 00:49:37 +0000 (0:00:01.282) 0:04:10.147 ********** 2026-04-17 00:55:38.940223 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940227 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940230 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940233 | orchestrator | 2026-04-17 00:55:38.940237 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-17 00:55:38.940240 | 
orchestrator | Friday 17 April 2026 00:49:38 +0000 (0:00:00.331) 0:04:10.479 ********** 2026-04-17 00:55:38.940243 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940246 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940249 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940252 | orchestrator | 2026-04-17 00:55:38.940255 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-17 00:55:38.940261 | orchestrator | Friday 17 April 2026 00:49:38 +0000 (0:00:00.373) 0:04:10.852 ********** 2026-04-17 00:55:38.940264 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940282 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940286 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940289 | orchestrator | 2026-04-17 00:55:38.940292 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-17 00:55:38.940295 | orchestrator | Friday 17 April 2026 00:49:40 +0000 (0:00:02.323) 0:04:13.176 ********** 2026-04-17 00:55:38.940298 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940301 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940304 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940307 | orchestrator | 2026-04-17 00:55:38.940310 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-17 00:55:38.940313 | orchestrator | Friday 17 April 2026 00:49:42 +0000 (0:00:01.441) 0:04:14.617 ********** 2026-04-17 00:55:38.940316 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940319 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940322 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940325 | orchestrator | 2026-04-17 00:55:38.940328 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-17 00:55:38.940332 | orchestrator | Friday 17 
April 2026 00:49:42 +0000 (0:00:00.307) 0:04:14.925 ********** 2026-04-17 00:55:38.940335 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.940338 | orchestrator | 2026-04-17 00:55:38.940341 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-17 00:55:38.940344 | orchestrator | Friday 17 April 2026 00:49:43 +0000 (0:00:00.457) 0:04:15.383 ********** 2026-04-17 00:55:38.940347 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940350 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940353 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940356 | orchestrator | 2026-04-17 00:55:38.940359 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-17 00:55:38.940362 | orchestrator | Friday 17 April 2026 00:49:43 +0000 (0:00:00.728) 0:04:16.111 ********** 2026-04-17 00:55:38.940365 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940368 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940371 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940374 | orchestrator | 2026-04-17 00:55:38.940377 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-17 00:55:38.940380 | orchestrator | Friday 17 April 2026 00:49:44 +0000 (0:00:00.271) 0:04:16.383 ********** 2026-04-17 00:55:38.940383 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.940386 | orchestrator | 2026-04-17 00:55:38.940389 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-17 00:55:38.940392 | orchestrator | Friday 17 April 2026 00:49:44 +0000 (0:00:00.429) 0:04:16.813 ********** 2026-04-17 00:55:38.940395 | orchestrator | changed: 
[testbed-node-1] 2026-04-17 00:55:38.940398 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940401 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940404 | orchestrator | 2026-04-17 00:55:38.940407 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-17 00:55:38.940411 | orchestrator | Friday 17 April 2026 00:49:46 +0000 (0:00:02.330) 0:04:19.143 ********** 2026-04-17 00:55:38.940414 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940417 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940420 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940423 | orchestrator | 2026-04-17 00:55:38.940426 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-17 00:55:38.940429 | orchestrator | Friday 17 April 2026 00:49:48 +0000 (0:00:01.500) 0:04:20.643 ********** 2026-04-17 00:55:38.940435 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940438 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940444 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940447 | orchestrator | 2026-04-17 00:55:38.940450 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-17 00:55:38.940453 | orchestrator | Friday 17 April 2026 00:49:50 +0000 (0:00:02.054) 0:04:22.697 ********** 2026-04-17 00:55:38.940456 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.940459 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.940462 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.940465 | orchestrator | 2026-04-17 00:55:38.940468 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-17 00:55:38.940471 | orchestrator | Friday 17 April 2026 00:49:52 +0000 (0:00:02.064) 0:04:24.762 ********** 2026-04-17 00:55:38.940474 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.940477 | orchestrator | 2026-04-17 00:55:38.940480 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-17 00:55:38.940484 | orchestrator | Friday 17 April 2026 00:49:52 +0000 (0:00:00.557) 0:04:25.319 ********** 2026-04-17 00:55:38.940487 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-04-17 00:55:38.940490 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940493 | orchestrator | 2026-04-17 00:55:38.940496 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-17 00:55:38.940499 | orchestrator | Friday 17 April 2026 00:50:14 +0000 (0:00:21.367) 0:04:46.686 ********** 2026-04-17 00:55:38.940502 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940505 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940508 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940511 | orchestrator | 2026-04-17 00:55:38.940514 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-17 00:55:38.940517 | orchestrator | Friday 17 April 2026 00:50:20 +0000 (0:00:06.156) 0:04:52.843 ********** 2026-04-17 00:55:38.940520 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940523 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940526 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940529 | orchestrator | 2026-04-17 00:55:38.940532 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-17 00:55:38.940544 | orchestrator | Friday 17 April 2026 00:50:20 +0000 (0:00:00.287) 0:04:53.131 ********** 2026-04-17 00:55:38.940549 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-17 00:55:38.940552 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-17 00:55:38.940556 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-17 00:55:38.940561 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-17 00:55:38.940566 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-17 00:55:38.940570 | orchestrator | skipping: 
[testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9553272c4c27f9db7e9608a012b1209d739ad78f'}])  2026-04-17 00:55:38.940574 | orchestrator | 2026-04-17 00:55:38.940577 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 00:55:38.940582 | orchestrator | Friday 17 April 2026 00:50:30 +0000 (0:00:10.193) 0:05:03.324 ********** 2026-04-17 00:55:38.940585 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940588 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940591 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940594 | orchestrator | 2026-04-17 00:55:38.940610 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-17 00:55:38.940616 | orchestrator | Friday 17 April 2026 00:50:31 +0000 (0:00:00.279) 0:05:03.604 ********** 2026-04-17 00:55:38.940621 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.940626 | orchestrator | 2026-04-17 00:55:38.940632 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-17 00:55:38.940637 | orchestrator | Friday 17 April 2026 00:50:31 +0000 (0:00:00.628) 0:05:04.233 ********** 2026-04-17 00:55:38.940642 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940648 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940651 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940654 | orchestrator | 2026-04-17 00:55:38.940657 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] 
*********************** 2026-04-17 00:55:38.940660 | orchestrator | Friday 17 April 2026 00:50:32 +0000 (0:00:00.282) 0:05:04.515 ********** 2026-04-17 00:55:38.940663 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940666 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940669 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940672 | orchestrator | 2026-04-17 00:55:38.940675 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-17 00:55:38.940678 | orchestrator | Friday 17 April 2026 00:50:32 +0000 (0:00:00.305) 0:05:04.821 ********** 2026-04-17 00:55:38.940682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 00:55:38.940685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 00:55:38.940688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 00:55:38.940691 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940694 | orchestrator | 2026-04-17 00:55:38.940697 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-17 00:55:38.940700 | orchestrator | Friday 17 April 2026 00:50:33 +0000 (0:00:00.571) 0:05:05.392 ********** 2026-04-17 00:55:38.940703 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940706 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940721 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940725 | orchestrator | 2026-04-17 00:55:38.940729 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-17 00:55:38.940732 | orchestrator | 2026-04-17 00:55:38.940736 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 00:55:38.940742 | orchestrator | Friday 17 April 2026 00:50:33 +0000 (0:00:00.666) 0:05:06.058 ********** 2026-04-17 00:55:38.940746 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.940750 | orchestrator | 2026-04-17 00:55:38.940753 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 00:55:38.940756 | orchestrator | Friday 17 April 2026 00:50:34 +0000 (0:00:00.435) 0:05:06.494 ********** 2026-04-17 00:55:38.940760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.940763 | orchestrator | 2026-04-17 00:55:38.940767 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 00:55:38.940771 | orchestrator | Friday 17 April 2026 00:50:34 +0000 (0:00:00.468) 0:05:06.962 ********** 2026-04-17 00:55:38.940774 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940778 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940781 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940784 | orchestrator | 2026-04-17 00:55:38.940788 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 00:55:38.940791 | orchestrator | Friday 17 April 2026 00:50:35 +0000 (0:00:00.880) 0:05:07.843 ********** 2026-04-17 00:55:38.940795 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940798 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940802 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940805 | orchestrator | 2026-04-17 00:55:38.940809 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 00:55:38.940812 | orchestrator | Friday 17 April 2026 00:50:35 +0000 (0:00:00.284) 0:05:08.127 ********** 2026-04-17 00:55:38.940816 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940820 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940823 
| orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940826 | orchestrator | 2026-04-17 00:55:38.940830 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 00:55:38.940833 | orchestrator | Friday 17 April 2026 00:50:36 +0000 (0:00:00.255) 0:05:08.382 ********** 2026-04-17 00:55:38.940837 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940840 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940844 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940853 | orchestrator | 2026-04-17 00:55:38.940857 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 00:55:38.940860 | orchestrator | Friday 17 April 2026 00:50:36 +0000 (0:00:00.262) 0:05:08.644 ********** 2026-04-17 00:55:38.940864 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940867 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940871 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940874 | orchestrator | 2026-04-17 00:55:38.940878 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 00:55:38.940881 | orchestrator | Friday 17 April 2026 00:50:37 +0000 (0:00:01.006) 0:05:09.651 ********** 2026-04-17 00:55:38.940885 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940888 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940892 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940895 | orchestrator | 2026-04-17 00:55:38.940898 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 00:55:38.940902 | orchestrator | Friday 17 April 2026 00:50:37 +0000 (0:00:00.322) 0:05:09.974 ********** 2026-04-17 00:55:38.940908 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940911 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940915 | orchestrator | 
skipping: [testbed-node-2] 2026-04-17 00:55:38.940918 | orchestrator | 2026-04-17 00:55:38.940921 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 00:55:38.940925 | orchestrator | Friday 17 April 2026 00:50:37 +0000 (0:00:00.276) 0:05:10.251 ********** 2026-04-17 00:55:38.940928 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940934 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940937 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940941 | orchestrator | 2026-04-17 00:55:38.940944 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 00:55:38.940948 | orchestrator | Friday 17 April 2026 00:50:38 +0000 (0:00:00.681) 0:05:10.932 ********** 2026-04-17 00:55:38.940951 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940955 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940958 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.940962 | orchestrator | 2026-04-17 00:55:38.940965 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 00:55:38.940969 | orchestrator | Friday 17 April 2026 00:50:39 +0000 (0:00:00.898) 0:05:11.830 ********** 2026-04-17 00:55:38.940972 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.940976 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.940979 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.940982 | orchestrator | 2026-04-17 00:55:38.940986 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 00:55:38.940989 | orchestrator | Friday 17 April 2026 00:50:39 +0000 (0:00:00.253) 0:05:12.083 ********** 2026-04-17 00:55:38.940992 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.940995 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.940998 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941001 | 
orchestrator | 2026-04-17 00:55:38.941004 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 00:55:38.941007 | orchestrator | Friday 17 April 2026 00:50:40 +0000 (0:00:00.274) 0:05:12.358 ********** 2026-04-17 00:55:38.941010 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941013 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941016 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941019 | orchestrator | 2026-04-17 00:55:38.941022 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 00:55:38.941034 | orchestrator | Friday 17 April 2026 00:50:40 +0000 (0:00:00.254) 0:05:12.612 ********** 2026-04-17 00:55:38.941038 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941041 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941044 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941047 | orchestrator | 2026-04-17 00:55:38.941050 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 00:55:38.941053 | orchestrator | Friday 17 April 2026 00:50:40 +0000 (0:00:00.436) 0:05:13.049 ********** 2026-04-17 00:55:38.941056 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941059 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941062 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941065 | orchestrator | 2026-04-17 00:55:38.941068 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 00:55:38.941071 | orchestrator | Friday 17 April 2026 00:50:40 +0000 (0:00:00.263) 0:05:13.313 ********** 2026-04-17 00:55:38.941074 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941077 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941080 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941083 | 
orchestrator | 2026-04-17 00:55:38.941086 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 00:55:38.941089 | orchestrator | Friday 17 April 2026 00:50:41 +0000 (0:00:00.272) 0:05:13.586 ********** 2026-04-17 00:55:38.941092 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941095 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941098 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941101 | orchestrator | 2026-04-17 00:55:38.941104 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 00:55:38.941107 | orchestrator | Friday 17 April 2026 00:50:41 +0000 (0:00:00.257) 0:05:13.843 ********** 2026-04-17 00:55:38.941110 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941113 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941119 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941122 | orchestrator | 2026-04-17 00:55:38.941125 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 00:55:38.941128 | orchestrator | Friday 17 April 2026 00:50:41 +0000 (0:00:00.285) 0:05:14.129 ********** 2026-04-17 00:55:38.941131 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941134 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941137 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941140 | orchestrator | 2026-04-17 00:55:38.941143 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 00:55:38.941146 | orchestrator | Friday 17 April 2026 00:50:42 +0000 (0:00:00.512) 0:05:14.642 ********** 2026-04-17 00:55:38.941149 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941152 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941157 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941164 | orchestrator | 2026-04-17 00:55:38.941171 | orchestrator 
| TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-17 00:55:38.941176 | orchestrator | Friday 17 April 2026 00:50:42 +0000 (0:00:00.468) 0:05:15.110 ********** 2026-04-17 00:55:38.941181 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 00:55:38.941185 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:55:38.941190 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:55:38.941195 | orchestrator | 2026-04-17 00:55:38.941199 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-17 00:55:38.941204 | orchestrator | Friday 17 April 2026 00:50:43 +0000 (0:00:00.701) 0:05:15.812 ********** 2026-04-17 00:55:38.941208 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.941214 | orchestrator | 2026-04-17 00:55:38.941219 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-17 00:55:38.941228 | orchestrator | Friday 17 April 2026 00:50:44 +0000 (0:00:00.627) 0:05:16.439 ********** 2026-04-17 00:55:38.941232 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941237 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941242 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941246 | orchestrator | 2026-04-17 00:55:38.941252 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-17 00:55:38.941257 | orchestrator | Friday 17 April 2026 00:50:44 +0000 (0:00:00.700) 0:05:17.140 ********** 2026-04-17 00:55:38.941263 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941267 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941273 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941276 | 
orchestrator | 2026-04-17 00:55:38.941279 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-17 00:55:38.941282 | orchestrator | Friday 17 April 2026 00:50:45 +0000 (0:00:00.264) 0:05:17.404 ********** 2026-04-17 00:55:38.941285 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 00:55:38.941289 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 00:55:38.941292 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 00:55:38.941295 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-17 00:55:38.941298 | orchestrator | 2026-04-17 00:55:38.941301 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-17 00:55:38.941304 | orchestrator | Friday 17 April 2026 00:50:52 +0000 (0:00:07.778) 0:05:25.183 ********** 2026-04-17 00:55:38.941307 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941310 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941313 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941316 | orchestrator | 2026-04-17 00:55:38.941319 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-17 00:55:38.941322 | orchestrator | Friday 17 April 2026 00:50:53 +0000 (0:00:00.729) 0:05:25.912 ********** 2026-04-17 00:55:38.941325 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-17 00:55:38.941331 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-17 00:55:38.941335 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-17 00:55:38.941338 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 00:55:38.941341 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.941358 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.941362 | orchestrator | 
2026-04-17 00:55:38.941365 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-17 00:55:38.941368 | orchestrator | Friday 17 April 2026 00:50:55 +0000 (0:00:01.875) 0:05:27.788 ********** 2026-04-17 00:55:38.941371 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-17 00:55:38.941374 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-17 00:55:38.941377 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-17 00:55:38.941380 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 00:55:38.941383 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-17 00:55:38.941386 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-17 00:55:38.941389 | orchestrator | 2026-04-17 00:55:38.941392 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-17 00:55:38.941395 | orchestrator | Friday 17 April 2026 00:50:56 +0000 (0:00:01.195) 0:05:28.983 ********** 2026-04-17 00:55:38.941398 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941401 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941404 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941407 | orchestrator | 2026-04-17 00:55:38.941410 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-17 00:55:38.941414 | orchestrator | Friday 17 April 2026 00:50:57 +0000 (0:00:00.736) 0:05:29.720 ********** 2026-04-17 00:55:38.941417 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941420 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941423 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941426 | orchestrator | 2026-04-17 00:55:38.941429 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-17 00:55:38.941432 | orchestrator | Friday 17 April 2026 00:50:57 +0000 
(0:00:00.445) 0:05:30.166 ********** 2026-04-17 00:55:38.941435 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941438 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941441 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941444 | orchestrator | 2026-04-17 00:55:38.941447 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-17 00:55:38.941450 | orchestrator | Friday 17 April 2026 00:50:58 +0000 (0:00:00.271) 0:05:30.437 ********** 2026-04-17 00:55:38.941453 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.941456 | orchestrator | 2026-04-17 00:55:38.941459 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-17 00:55:38.941462 | orchestrator | Friday 17 April 2026 00:50:58 +0000 (0:00:00.446) 0:05:30.884 ********** 2026-04-17 00:55:38.941465 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941468 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941471 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941474 | orchestrator | 2026-04-17 00:55:38.941477 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-17 00:55:38.941480 | orchestrator | Friday 17 April 2026 00:50:58 +0000 (0:00:00.253) 0:05:31.138 ********** 2026-04-17 00:55:38.941483 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941486 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941489 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.941493 | orchestrator | 2026-04-17 00:55:38.941496 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-17 00:55:38.941499 | orchestrator | Friday 17 April 2026 00:50:59 +0000 (0:00:00.438) 0:05:31.577 ********** 2026-04-17 00:55:38.941504 | 
orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.941507 | orchestrator | 2026-04-17 00:55:38.941510 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-17 00:55:38.941515 | orchestrator | Friday 17 April 2026 00:50:59 +0000 (0:00:00.435) 0:05:32.013 ********** 2026-04-17 00:55:38.941518 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941521 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941524 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941528 | orchestrator | 2026-04-17 00:55:38.941531 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-17 00:55:38.941534 | orchestrator | Friday 17 April 2026 00:51:00 +0000 (0:00:01.244) 0:05:33.258 ********** 2026-04-17 00:55:38.941537 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941540 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941543 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941546 | orchestrator | 2026-04-17 00:55:38.941549 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-17 00:55:38.941552 | orchestrator | Friday 17 April 2026 00:51:02 +0000 (0:00:01.397) 0:05:34.656 ********** 2026-04-17 00:55:38.941555 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941558 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941561 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941564 | orchestrator | 2026-04-17 00:55:38.941567 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-17 00:55:38.941570 | orchestrator | Friday 17 April 2026 00:51:04 +0000 (0:00:02.072) 0:05:36.728 ********** 2026-04-17 00:55:38.941573 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941576 | 
orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941579 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941582 | orchestrator | 2026-04-17 00:55:38.941585 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-17 00:55:38.941588 | orchestrator | Friday 17 April 2026 00:51:06 +0000 (0:00:02.047) 0:05:38.775 ********** 2026-04-17 00:55:38.941591 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941594 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.941608 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-17 00:55:38.941612 | orchestrator | 2026-04-17 00:55:38.941615 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-17 00:55:38.941618 | orchestrator | Friday 17 April 2026 00:51:06 +0000 (0:00:00.315) 0:05:39.090 ********** 2026-04-17 00:55:38.941630 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-17 00:55:38.941634 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 
2026-04-17 00:55:38.941637 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.941640 | orchestrator | 2026-04-17 00:55:38.941643 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-17 00:55:38.941646 | orchestrator | Friday 17 April 2026 00:51:20 +0000 (0:00:13.544) 0:05:52.635 ********** 2026-04-17 00:55:38.941649 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.941652 | orchestrator | 2026-04-17 00:55:38.941656 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-17 00:55:38.941659 | orchestrator | Friday 17 April 2026 00:51:21 +0000 (0:00:01.356) 0:05:53.991 ********** 2026-04-17 00:55:38.941662 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941665 | orchestrator | 2026-04-17 00:55:38.941668 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-17 00:55:38.941671 | orchestrator | Friday 17 April 2026 00:51:21 +0000 (0:00:00.311) 0:05:54.303 ********** 2026-04-17 00:55:38.941674 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941677 | orchestrator | 2026-04-17 00:55:38.941683 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-17 00:55:38.941686 | orchestrator | Friday 17 April 2026 00:51:22 +0000 (0:00:00.148) 0:05:54.451 ********** 2026-04-17 00:55:38.941689 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-17 00:55:38.941692 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-17 00:55:38.941695 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-17 00:55:38.941698 | orchestrator | 2026-04-17 00:55:38.941701 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-04-17 00:55:38.941704 | orchestrator | Friday 17 April 2026 00:51:28 +0000 (0:00:06.062) 0:06:00.514 ********** 2026-04-17 00:55:38.941707 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-17 00:55:38.941710 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-17 00:55:38.941713 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-17 00:55:38.941716 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-17 00:55:38.941719 | orchestrator | 2026-04-17 00:55:38.941722 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 00:55:38.941725 | orchestrator | Friday 17 April 2026 00:51:32 +0000 (0:00:04.521) 0:06:05.035 ********** 2026-04-17 00:55:38.941728 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941731 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941735 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941738 | orchestrator | 2026-04-17 00:55:38.941741 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-17 00:55:38.941744 | orchestrator | Friday 17 April 2026 00:51:33 +0000 (0:00:00.980) 0:06:06.016 ********** 2026-04-17 00:55:38.941747 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.941750 | orchestrator | 2026-04-17 00:55:38.941753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-17 00:55:38.941756 | orchestrator | Friday 17 April 2026 00:51:34 +0000 (0:00:00.488) 0:06:06.505 ********** 2026-04-17 00:55:38.941759 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941762 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941767 | orchestrator | ok: 
[testbed-node-2] 2026-04-17 00:55:38.941770 | orchestrator | 2026-04-17 00:55:38.941773 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-17 00:55:38.941776 | orchestrator | Friday 17 April 2026 00:51:34 +0000 (0:00:00.304) 0:06:06.810 ********** 2026-04-17 00:55:38.941779 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.941782 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.941785 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.941788 | orchestrator | 2026-04-17 00:55:38.941791 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-17 00:55:38.941795 | orchestrator | Friday 17 April 2026 00:51:36 +0000 (0:00:01.564) 0:06:08.374 ********** 2026-04-17 00:55:38.941798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-17 00:55:38.941801 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-17 00:55:38.941804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-17 00:55:38.941807 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.941810 | orchestrator | 2026-04-17 00:55:38.941813 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-17 00:55:38.941816 | orchestrator | Friday 17 April 2026 00:51:36 +0000 (0:00:00.645) 0:06:09.020 ********** 2026-04-17 00:55:38.941819 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.941822 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.941825 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.941828 | orchestrator | 2026-04-17 00:55:38.941832 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-17 00:55:38.941837 | orchestrator | 2026-04-17 00:55:38.941840 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 
00:55:38.941843 | orchestrator | Friday 17 April 2026 00:51:37 +0000 (0:00:00.580) 0:06:09.600 ********** 2026-04-17 00:55:38.941846 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.941849 | orchestrator | 2026-04-17 00:55:38.941852 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-17 00:55:38.941855 | orchestrator | Friday 17 April 2026 00:51:37 +0000 (0:00:00.672) 0:06:10.273 ********** 2026-04-17 00:55:38.941868 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.941871 | orchestrator | 2026-04-17 00:55:38.941874 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 00:55:38.941878 | orchestrator | Friday 17 April 2026 00:51:38 +0000 (0:00:00.529) 0:06:10.802 ********** 2026-04-17 00:55:38.941881 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.941884 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.941887 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.941890 | orchestrator | 2026-04-17 00:55:38.941893 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 00:55:38.941896 | orchestrator | Friday 17 April 2026 00:51:38 +0000 (0:00:00.309) 0:06:11.111 ********** 2026-04-17 00:55:38.941899 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.941902 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.941905 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.941908 | orchestrator | 2026-04-17 00:55:38.941911 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 00:55:38.941914 | orchestrator | Friday 17 April 2026 00:51:39 +0000 (0:00:00.986) 0:06:12.098 ********** 
2026-04-17 00:55:38.941917 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.941920 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.941923 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.941926 | orchestrator | 2026-04-17 00:55:38.941929 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 00:55:38.941932 | orchestrator | Friday 17 April 2026 00:51:40 +0000 (0:00:00.758) 0:06:12.857 ********** 2026-04-17 00:55:38.941935 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.941938 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.941941 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.941944 | orchestrator | 2026-04-17 00:55:38.941947 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 00:55:38.941950 | orchestrator | Friday 17 April 2026 00:51:41 +0000 (0:00:00.713) 0:06:13.570 ********** 2026-04-17 00:55:38.941953 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.941956 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.941959 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.941962 | orchestrator | 2026-04-17 00:55:38.941965 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-17 00:55:38.941968 | orchestrator | Friday 17 April 2026 00:51:41 +0000 (0:00:00.299) 0:06:13.870 ********** 2026-04-17 00:55:38.941972 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.941975 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.941978 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.941981 | orchestrator | 2026-04-17 00:55:38.941984 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 00:55:38.941987 | orchestrator | Friday 17 April 2026 00:51:42 +0000 (0:00:00.556) 0:06:14.427 ********** 2026-04-17 00:55:38.941990 | 
orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.941993 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.941996 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.941999 | orchestrator | 2026-04-17 00:55:38.942002 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 00:55:38.942007 | orchestrator | Friday 17 April 2026 00:51:42 +0000 (0:00:00.311) 0:06:14.739 ********** 2026-04-17 00:55:38.942010 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942031 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942034 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942037 | orchestrator | 2026-04-17 00:55:38.942040 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 00:55:38.942043 | orchestrator | Friday 17 April 2026 00:51:43 +0000 (0:00:00.723) 0:06:15.463 ********** 2026-04-17 00:55:38.942046 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942049 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942052 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942055 | orchestrator | 2026-04-17 00:55:38.942060 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 00:55:38.942063 | orchestrator | Friday 17 April 2026 00:51:43 +0000 (0:00:00.677) 0:06:16.140 ********** 2026-04-17 00:55:38.942066 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942069 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942072 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942075 | orchestrator | 2026-04-17 00:55:38.942078 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 00:55:38.942082 | orchestrator | Friday 17 April 2026 00:51:44 +0000 (0:00:00.632) 0:06:16.772 ********** 2026-04-17 00:55:38.942085 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 00:55:38.942088 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942091 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942094 | orchestrator | 2026-04-17 00:55:38.942097 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 00:55:38.942100 | orchestrator | Friday 17 April 2026 00:51:44 +0000 (0:00:00.300) 0:06:17.073 ********** 2026-04-17 00:55:38.942103 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942106 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942109 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942112 | orchestrator | 2026-04-17 00:55:38.942115 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 00:55:38.942118 | orchestrator | Friday 17 April 2026 00:51:45 +0000 (0:00:00.351) 0:06:17.425 ********** 2026-04-17 00:55:38.942121 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942124 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942127 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942130 | orchestrator | 2026-04-17 00:55:38.942133 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 00:55:38.942136 | orchestrator | Friday 17 April 2026 00:51:45 +0000 (0:00:00.323) 0:06:17.749 ********** 2026-04-17 00:55:38.942139 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942142 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942145 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942148 | orchestrator | 2026-04-17 00:55:38.942151 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 00:55:38.942154 | orchestrator | Friday 17 April 2026 00:51:46 +0000 (0:00:00.643) 0:06:18.392 ********** 2026-04-17 00:55:38.942157 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942161 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942164 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942167 | orchestrator | 2026-04-17 00:55:38.942172 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 00:55:38.942176 | orchestrator | Friday 17 April 2026 00:51:46 +0000 (0:00:00.276) 0:06:18.669 ********** 2026-04-17 00:55:38.942179 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942182 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942185 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942188 | orchestrator | 2026-04-17 00:55:38.942191 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 00:55:38.942194 | orchestrator | Friday 17 April 2026 00:51:46 +0000 (0:00:00.265) 0:06:18.934 ********** 2026-04-17 00:55:38.942199 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942202 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942205 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942208 | orchestrator | 2026-04-17 00:55:38.942211 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 00:55:38.942214 | orchestrator | Friday 17 April 2026 00:51:46 +0000 (0:00:00.251) 0:06:19.186 ********** 2026-04-17 00:55:38.942217 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942220 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942223 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942226 | orchestrator | 2026-04-17 00:55:38.942229 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 00:55:38.942233 | orchestrator | Friday 17 April 2026 00:51:47 +0000 (0:00:00.450) 0:06:19.636 ********** 2026-04-17 00:55:38.942236 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942239 | orchestrator | ok: 
[testbed-node-4] 2026-04-17 00:55:38.942242 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942245 | orchestrator | 2026-04-17 00:55:38.942248 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-17 00:55:38.942251 | orchestrator | Friday 17 April 2026 00:51:47 +0000 (0:00:00.457) 0:06:20.094 ********** 2026-04-17 00:55:38.942254 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942257 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942261 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942267 | orchestrator | 2026-04-17 00:55:38.942273 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-17 00:55:38.942280 | orchestrator | Friday 17 April 2026 00:51:48 +0000 (0:00:00.273) 0:06:20.367 ********** 2026-04-17 00:55:38.942287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:55:38.942291 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:55:38.942296 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:55:38.942301 | orchestrator | 2026-04-17 00:55:38.942305 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-17 00:55:38.942310 | orchestrator | Friday 17 April 2026 00:51:48 +0000 (0:00:00.721) 0:06:21.089 ********** 2026-04-17 00:55:38.942315 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.942319 | orchestrator | 2026-04-17 00:55:38.942324 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-17 00:55:38.942329 | orchestrator | Friday 17 April 2026 00:51:49 +0000 (0:00:00.588) 0:06:21.678 ********** 2026-04-17 00:55:38.942333 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 00:55:38.942338 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942343 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942347 | orchestrator | 2026-04-17 00:55:38.942352 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-17 00:55:38.942356 | orchestrator | Friday 17 April 2026 00:51:49 +0000 (0:00:00.257) 0:06:21.936 ********** 2026-04-17 00:55:38.942360 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942368 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942373 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942377 | orchestrator | 2026-04-17 00:55:38.942382 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-17 00:55:38.942387 | orchestrator | Friday 17 April 2026 00:51:49 +0000 (0:00:00.232) 0:06:22.168 ********** 2026-04-17 00:55:38.942392 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942397 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942402 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942407 | orchestrator | 2026-04-17 00:55:38.942412 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-17 00:55:38.942417 | orchestrator | Friday 17 April 2026 00:51:50 +0000 (0:00:00.916) 0:06:23.085 ********** 2026-04-17 00:55:38.942427 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942430 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942433 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942436 | orchestrator | 2026-04-17 00:55:38.942439 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-17 00:55:38.942442 | orchestrator | Friday 17 April 2026 00:51:51 +0000 (0:00:00.343) 0:06:23.428 ********** 2026-04-17 00:55:38.942445 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 00:55:38.942449 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 00:55:38.942452 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 00:55:38.942455 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-17 00:55:38.942458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 00:55:38.942461 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 00:55:38.942464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 00:55:38.942467 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 00:55:38.942475 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 00:55:38.942478 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-17 00:55:38.942481 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 00:55:38.942484 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-17 00:55:38.942487 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-17 00:55:38.942490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 00:55:38.942493 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-17 00:55:38.942496 | orchestrator | 2026-04-17 00:55:38.942499 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-04-17 00:55:38.942502 | orchestrator | Friday 17 April 2026 00:51:55 +0000 (0:00:04.168) 0:06:27.597 ********** 2026-04-17 00:55:38.942505 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942508 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942511 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942515 | orchestrator | 2026-04-17 00:55:38.942518 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-17 00:55:38.942521 | orchestrator | Friday 17 April 2026 00:51:55 +0000 (0:00:00.245) 0:06:27.842 ********** 2026-04-17 00:55:38.942524 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.942527 | orchestrator | 2026-04-17 00:55:38.942530 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-17 00:55:38.942533 | orchestrator | Friday 17 April 2026 00:51:56 +0000 (0:00:00.610) 0:06:28.452 ********** 2026-04-17 00:55:38.942536 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 00:55:38.942539 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 00:55:38.942542 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-17 00:55:38.942545 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-17 00:55:38.942548 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-17 00:55:38.942551 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-17 00:55:38.942554 | orchestrator | 2026-04-17 00:55:38.942557 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-17 00:55:38.942563 | orchestrator | Friday 17 April 2026 00:51:57 +0000 (0:00:01.112) 0:06:29.565 ********** 2026-04-17 00:55:38.942566 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.942569 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 00:55:38.942572 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 00:55:38.942575 | orchestrator | 2026-04-17 00:55:38.942578 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-17 00:55:38.942581 | orchestrator | Friday 17 April 2026 00:51:58 +0000 (0:00:01.673) 0:06:31.238 ********** 2026-04-17 00:55:38.942584 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 00:55:38.942587 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-17 00:55:38.942590 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.942593 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 00:55:38.942620 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-17 00:55:38.942623 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.942629 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 00:55:38.942632 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-17 00:55:38.942635 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.942638 | orchestrator | 2026-04-17 00:55:38.942641 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-17 00:55:38.942644 | orchestrator | Friday 17 April 2026 00:52:00 +0000 (0:00:01.524) 0:06:32.763 ********** 2026-04-17 00:55:38.942647 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.942650 | orchestrator | 2026-04-17 00:55:38.942653 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-17 00:55:38.942656 | orchestrator | Friday 17 April 2026 00:52:02 +0000 (0:00:01.768) 0:06:34.532 ********** 2026-04-17 00:55:38.942659 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.942662 | orchestrator | 2026-04-17 00:55:38.942665 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-17 00:55:38.942668 | orchestrator | Friday 17 April 2026 00:52:02 +0000 (0:00:00.387) 0:06:34.919 ********** 2026-04-17 00:55:38.942672 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e', 'data_vg': 'ceph-2bf72114-67c4-59b2-99b4-0dc6e46ccf1e'}) 2026-04-17 00:55:38.942676 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f135813a-7de6-5823-bba0-0d89f58fd8f7', 'data_vg': 'ceph-f135813a-7de6-5823-bba0-0d89f58fd8f7'}) 2026-04-17 00:55:38.942679 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d097a065-5c07-563d-9f82-653f6f04c198', 'data_vg': 'ceph-d097a065-5c07-563d-9f82-653f6f04c198'}) 2026-04-17 00:55:38.942682 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db', 'data_vg': 'ceph-ecb05008-8fcc-5a4f-bdd9-0d58d51e77db'}) 2026-04-17 00:55:38.942687 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-96c1a302-a68f-51af-8cb0-5deb1c72c0bb', 'data_vg': 'ceph-96c1a302-a68f-51af-8cb0-5deb1c72c0bb'}) 2026-04-17 00:55:38.942691 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-037810f1-d9a1-54dd-a4a8-d143a432af64', 'data_vg': 'ceph-037810f1-d9a1-54dd-a4a8-d143a432af64'}) 2026-04-17 00:55:38.942694 | orchestrator | 2026-04-17 00:55:38.942697 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-17 00:55:38.942700 | orchestrator | Friday 17 April 2026 00:52:40 +0000 (0:00:37.672) 0:07:12.591 ********** 2026-04-17 00:55:38.942703 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942706 | orchestrator | skipping: [testbed-node-4] 2026-04-17 
00:55:38.942709 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942712 | orchestrator | 2026-04-17 00:55:38.942715 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-17 00:55:38.942718 | orchestrator | Friday 17 April 2026 00:52:40 +0000 (0:00:00.444) 0:07:13.035 ********** 2026-04-17 00:55:38.942724 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.942727 | orchestrator | 2026-04-17 00:55:38.942730 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-17 00:55:38.942733 | orchestrator | Friday 17 April 2026 00:52:41 +0000 (0:00:00.476) 0:07:13.512 ********** 2026-04-17 00:55:38.942736 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942739 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942742 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942745 | orchestrator | 2026-04-17 00:55:38.942748 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-17 00:55:38.942751 | orchestrator | Friday 17 April 2026 00:52:41 +0000 (0:00:00.710) 0:07:14.222 ********** 2026-04-17 00:55:38.942754 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.942757 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.942760 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.942763 | orchestrator | 2026-04-17 00:55:38.942766 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-17 00:55:38.942769 | orchestrator | Friday 17 April 2026 00:52:43 +0000 (0:00:01.868) 0:07:16.091 ********** 2026-04-17 00:55:38.942772 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.942776 | orchestrator | 2026-04-17 00:55:38.942779 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-17 00:55:38.942782 | orchestrator | Friday 17 April 2026 00:52:44 +0000 (0:00:00.499) 0:07:16.591 ********** 2026-04-17 00:55:38.942785 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.942788 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.942791 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.942794 | orchestrator | 2026-04-17 00:55:38.942797 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-17 00:55:38.942800 | orchestrator | Friday 17 April 2026 00:52:45 +0000 (0:00:01.403) 0:07:17.995 ********** 2026-04-17 00:55:38.942803 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.942806 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.942809 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.942812 | orchestrator | 2026-04-17 00:55:38.942815 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-17 00:55:38.942818 | orchestrator | Friday 17 April 2026 00:52:47 +0000 (0:00:01.501) 0:07:19.497 ********** 2026-04-17 00:55:38.942821 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.942824 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.942827 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.942830 | orchestrator | 2026-04-17 00:55:38.942833 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-17 00:55:38.942839 | orchestrator | Friday 17 April 2026 00:52:49 +0000 (0:00:01.869) 0:07:21.366 ********** 2026-04-17 00:55:38.942842 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942845 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942848 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942852 | orchestrator | 2026-04-17 00:55:38.942855 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-17 00:55:38.942858 | orchestrator | Friday 17 April 2026 00:52:49 +0000 (0:00:00.305) 0:07:21.672 ********** 2026-04-17 00:55:38.942861 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942864 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942867 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.942870 | orchestrator | 2026-04-17 00:55:38.942873 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-17 00:55:38.942876 | orchestrator | Friday 17 April 2026 00:52:49 +0000 (0:00:00.323) 0:07:21.996 ********** 2026-04-17 00:55:38.942879 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-17 00:55:38.942882 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-17 00:55:38.942887 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-17 00:55:38.942890 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-17 00:55:38.942893 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 00:55:38.942896 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-17 00:55:38.942899 | orchestrator | 2026-04-17 00:55:38.942902 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-17 00:55:38.942905 | orchestrator | Friday 17 April 2026 00:52:51 +0000 (0:00:01.431) 0:07:23.427 ********** 2026-04-17 00:55:38.942908 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-17 00:55:38.942911 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-17 00:55:38.942914 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-17 00:55:38.942917 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-17 00:55:38.942920 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-17 00:55:38.942923 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-17 00:55:38.942926 | orchestrator | 2026-04-17 00:55:38.942929 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-17 00:55:38.942932 | orchestrator | Friday 17 April 2026 00:52:53 +0000 (0:00:02.172) 0:07:25.600 ********** 2026-04-17 00:55:38.942935 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-17 00:55:38.942938 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-17 00:55:38.942944 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-17 00:55:38.942947 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-17 00:55:38.942950 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-17 00:55:38.942953 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-17 00:55:38.942956 | orchestrator | 2026-04-17 00:55:38.942959 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-17 00:55:38.942962 | orchestrator | Friday 17 April 2026 00:52:57 +0000 (0:00:03.865) 0:07:29.465 ********** 2026-04-17 00:55:38.942965 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942968 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942971 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.942974 | orchestrator | 2026-04-17 00:55:38.942977 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-17 00:55:38.942980 | orchestrator | Friday 17 April 2026 00:52:59 +0000 (0:00:02.162) 0:07:31.628 ********** 2026-04-17 00:55:38.942983 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.942986 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.942989 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-17 00:55:38.942992 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.942996 | orchestrator | 2026-04-17 00:55:38.943001 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-17 00:55:38.943007 | orchestrator | Friday 17 April 2026 00:53:12 +0000 (0:00:12.770) 0:07:44.399 ********** 2026-04-17 00:55:38.943012 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943017 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943022 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943030 | orchestrator | 2026-04-17 00:55:38.943036 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-17 00:55:38.943041 | orchestrator | Friday 17 April 2026 00:53:12 +0000 (0:00:00.809) 0:07:45.209 ********** 2026-04-17 00:55:38.943046 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943051 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943056 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943062 | orchestrator | 2026-04-17 00:55:38.943067 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-17 00:55:38.943072 | orchestrator | Friday 17 April 2026 00:53:13 +0000 (0:00:00.573) 0:07:45.782 ********** 2026-04-17 00:55:38.943077 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:55:38.943086 | orchestrator | 2026-04-17 00:55:38.943090 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-17 00:55:38.943095 | orchestrator | Friday 17 April 2026 00:53:13 +0000 (0:00:00.502) 0:07:46.285 ********** 2026-04-17 00:55:38.943100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.943105 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-17 00:55:38.943110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.943116 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943121 | orchestrator | 2026-04-17 00:55:38.943126 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-17 00:55:38.943132 | orchestrator | Friday 17 April 2026 00:53:14 +0000 (0:00:00.386) 0:07:46.671 ********** 2026-04-17 00:55:38.943137 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943142 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943148 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943153 | orchestrator | 2026-04-17 00:55:38.943159 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-17 00:55:38.943168 | orchestrator | Friday 17 April 2026 00:53:14 +0000 (0:00:00.300) 0:07:46.971 ********** 2026-04-17 00:55:38.943174 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943179 | orchestrator | 2026-04-17 00:55:38.943185 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-17 00:55:38.943190 | orchestrator | Friday 17 April 2026 00:53:15 +0000 (0:00:00.766) 0:07:47.737 ********** 2026-04-17 00:55:38.943196 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943201 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943204 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943207 | orchestrator | 2026-04-17 00:55:38.943210 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-17 00:55:38.943213 | orchestrator | Friday 17 April 2026 00:53:15 +0000 (0:00:00.307) 0:07:48.045 ********** 2026-04-17 00:55:38.943216 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943219 | orchestrator | 2026-04-17 00:55:38.943222 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-17 00:55:38.943225 | orchestrator | Friday 17 April 2026 00:53:15 +0000 (0:00:00.212) 0:07:48.258 ********** 2026-04-17 00:55:38.943228 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943231 | orchestrator | 2026-04-17 00:55:38.943234 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-17 00:55:38.943237 | orchestrator | Friday 17 April 2026 00:53:16 +0000 (0:00:00.227) 0:07:48.485 ********** 2026-04-17 00:55:38.943240 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943243 | orchestrator | 2026-04-17 00:55:38.943246 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-17 00:55:38.943249 | orchestrator | Friday 17 April 2026 00:53:16 +0000 (0:00:00.116) 0:07:48.601 ********** 2026-04-17 00:55:38.943252 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943255 | orchestrator | 2026-04-17 00:55:38.943258 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-17 00:55:38.943262 | orchestrator | Friday 17 April 2026 00:53:16 +0000 (0:00:00.206) 0:07:48.807 ********** 2026-04-17 00:55:38.943265 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943268 | orchestrator | 2026-04-17 00:55:38.943271 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-17 00:55:38.943274 | orchestrator | Friday 17 April 2026 00:53:16 +0000 (0:00:00.206) 0:07:49.013 ********** 2026-04-17 00:55:38.943280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:55:38.943283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:55:38.943286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:55:38.943289 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
00:55:38.943292 | orchestrator | 2026-04-17 00:55:38.943298 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-17 00:55:38.943301 | orchestrator | Friday 17 April 2026 00:53:17 +0000 (0:00:00.364) 0:07:49.378 ********** 2026-04-17 00:55:38.943304 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943307 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943311 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943314 | orchestrator | 2026-04-17 00:55:38.943317 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-17 00:55:38.943320 | orchestrator | Friday 17 April 2026 00:53:17 +0000 (0:00:00.526) 0:07:49.904 ********** 2026-04-17 00:55:38.943323 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943326 | orchestrator | 2026-04-17 00:55:38.943329 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-17 00:55:38.943332 | orchestrator | Friday 17 April 2026 00:53:17 +0000 (0:00:00.214) 0:07:50.119 ********** 2026-04-17 00:55:38.943335 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943338 | orchestrator | 2026-04-17 00:55:38.943341 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-17 00:55:38.943344 | orchestrator | 2026-04-17 00:55:38.943347 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-17 00:55:38.943350 | orchestrator | Friday 17 April 2026 00:53:18 +0000 (0:00:00.664) 0:07:50.783 ********** 2026-04-17 00:55:38.943354 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.943358 | orchestrator | 2026-04-17 00:55:38.943361 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-17 00:55:38.943364 | orchestrator | Friday 17 April 2026 00:53:19 +0000 (0:00:01.178) 0:07:51.962 ********** 2026-04-17 00:55:38.943367 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.943370 | orchestrator | 2026-04-17 00:55:38.943373 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-17 00:55:38.943376 | orchestrator | Friday 17 April 2026 00:53:20 +0000 (0:00:01.244) 0:07:53.206 ********** 2026-04-17 00:55:38.943379 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943382 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943385 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943389 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.943392 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.943395 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.943398 | orchestrator | 2026-04-17 00:55:38.943401 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-17 00:55:38.943404 | orchestrator | Friday 17 April 2026 00:53:22 +0000 (0:00:01.170) 0:07:54.377 ********** 2026-04-17 00:55:38.943407 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943410 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943413 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943416 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943419 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943422 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943425 | orchestrator | 2026-04-17 00:55:38.943428 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-17 00:55:38.943433 | orchestrator | Friday 17 
April 2026 00:53:22 +0000 (0:00:00.702) 0:07:55.079 ********** 2026-04-17 00:55:38.943436 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943439 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943442 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943445 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943448 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943451 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943456 | orchestrator | 2026-04-17 00:55:38.943461 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-17 00:55:38.943471 | orchestrator | Friday 17 April 2026 00:53:23 +0000 (0:00:00.857) 0:07:55.937 ********** 2026-04-17 00:55:38.943477 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943482 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943487 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943492 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943497 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943502 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943507 | orchestrator | 2026-04-17 00:55:38.943512 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-17 00:55:38.943518 | orchestrator | Friday 17 April 2026 00:53:24 +0000 (0:00:00.715) 0:07:56.652 ********** 2026-04-17 00:55:38.943523 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943528 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943533 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943539 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.943542 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.943545 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.943549 | orchestrator | 2026-04-17 00:55:38.943552 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-04-17 00:55:38.943555 | orchestrator | Friday 17 April 2026 00:53:25 +0000 (0:00:01.048) 0:07:57.700 ********** 2026-04-17 00:55:38.943558 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943561 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943564 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943567 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943570 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943573 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943576 | orchestrator | 2026-04-17 00:55:38.943579 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-17 00:55:38.943582 | orchestrator | Friday 17 April 2026 00:53:25 +0000 (0:00:00.510) 0:07:58.211 ********** 2026-04-17 00:55:38.943585 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943591 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943594 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943619 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943622 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943625 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943628 | orchestrator | 2026-04-17 00:55:38.943631 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-17 00:55:38.943634 | orchestrator | Friday 17 April 2026 00:53:26 +0000 (0:00:00.501) 0:07:58.713 ********** 2026-04-17 00:55:38.943637 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943640 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943643 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943646 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.943649 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.943652 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.943656 | 
orchestrator | 2026-04-17 00:55:38.943661 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-17 00:55:38.943666 | orchestrator | Friday 17 April 2026 00:53:27 +0000 (0:00:01.193) 0:07:59.906 ********** 2026-04-17 00:55:38.943670 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943675 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943681 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943686 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.943690 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.943694 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.943699 | orchestrator | 2026-04-17 00:55:38.943704 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-17 00:55:38.943710 | orchestrator | Friday 17 April 2026 00:53:28 +0000 (0:00:00.929) 0:08:00.837 ********** 2026-04-17 00:55:38.943715 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943720 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943729 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943734 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943739 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943744 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943749 | orchestrator | 2026-04-17 00:55:38.943755 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-17 00:55:38.943758 | orchestrator | Friday 17 April 2026 00:53:29 +0000 (0:00:00.922) 0:08:01.759 ********** 2026-04-17 00:55:38.943761 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943764 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943767 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943770 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.943773 | orchestrator | ok: [testbed-node-1] 2026-04-17 
00:55:38.943776 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.943779 | orchestrator | 2026-04-17 00:55:38.943782 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-17 00:55:38.943785 | orchestrator | Friday 17 April 2026 00:53:29 +0000 (0:00:00.524) 0:08:02.284 ********** 2026-04-17 00:55:38.943788 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943791 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943794 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943797 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943800 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943803 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943807 | orchestrator | 2026-04-17 00:55:38.943810 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-17 00:55:38.943813 | orchestrator | Friday 17 April 2026 00:53:30 +0000 (0:00:00.681) 0:08:02.965 ********** 2026-04-17 00:55:38.943816 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943819 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943823 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943827 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943832 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943838 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943845 | orchestrator | 2026-04-17 00:55:38.943850 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-17 00:55:38.943859 | orchestrator | Friday 17 April 2026 00:53:31 +0000 (0:00:00.494) 0:08:03.459 ********** 2026-04-17 00:55:38.943863 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.943868 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.943873 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.943878 | orchestrator | skipping: [testbed-node-0] 
2026-04-17 00:55:38.943883 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943887 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943892 | orchestrator | 2026-04-17 00:55:38.943898 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-17 00:55:38.943903 | orchestrator | Friday 17 April 2026 00:53:31 +0000 (0:00:00.670) 0:08:04.129 ********** 2026-04-17 00:55:38.943908 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943913 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943918 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943923 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943928 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943933 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943937 | orchestrator | 2026-04-17 00:55:38.943943 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-17 00:55:38.943948 | orchestrator | Friday 17 April 2026 00:53:32 +0000 (0:00:00.495) 0:08:04.625 ********** 2026-04-17 00:55:38.943953 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.943958 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.943963 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.943968 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:55:38.943974 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:55:38.943979 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:55:38.943989 | orchestrator | 2026-04-17 00:55:38.943995 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-17 00:55:38.944000 | orchestrator | Friday 17 April 2026 00:53:32 +0000 (0:00:00.648) 0:08:05.273 ********** 2026-04-17 00:55:38.944004 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.944010 | orchestrator | skipping: [testbed-node-4] 
2026-04-17 00:55:38.944014 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.944020 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.944025 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.944030 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.944035 | orchestrator | 2026-04-17 00:55:38.944040 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-17 00:55:38.944050 | orchestrator | Friday 17 April 2026 00:53:33 +0000 (0:00:00.525) 0:08:05.798 ********** 2026-04-17 00:55:38.944055 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.944060 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.944065 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.944070 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.944076 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.944080 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.944083 | orchestrator | 2026-04-17 00:55:38.944086 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-17 00:55:38.944089 | orchestrator | Friday 17 April 2026 00:53:34 +0000 (0:00:00.714) 0:08:06.513 ********** 2026-04-17 00:55:38.944092 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:55:38.944095 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:55:38.944098 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:55:38.944104 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.944111 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:55:38.944116 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:55:38.944121 | orchestrator | 2026-04-17 00:55:38.944125 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-17 00:55:38.944130 | orchestrator | Friday 17 April 2026 00:53:35 +0000 (0:00:01.092) 0:08:07.606 ********** 2026-04-17 00:55:38.944134 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.944139 | orchestrator | 2026-04-17 00:55:38.944143 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-17 00:55:38.944148 | orchestrator | Friday 17 April 2026 00:53:38 +0000 (0:00:03.354) 0:08:10.960 ********** 2026-04-17 00:55:38.944153 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 00:55:38.944157 | orchestrator | 2026-04-17 00:55:38.944162 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-17 00:55:38.944167 | orchestrator | Friday 17 April 2026 00:53:40 +0000 (0:00:01.949) 0:08:12.910 ********** 2026-04-17 00:55:38.944172 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.944176 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.944181 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.944185 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:55:38.944190 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.944195 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.944200 | orchestrator | 2026-04-17 00:55:38.944204 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-17 00:55:38.944208 | orchestrator | Friday 17 April 2026 00:53:41 +0000 (0:00:01.281) 0:08:14.191 ********** 2026-04-17 00:55:38.944213 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.944217 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.944222 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.944227 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.944232 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.944238 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.944241 | orchestrator | 2026-04-17 00:55:38.944244 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-04-17 00:55:38.944247 | orchestrator | Friday 17 April 2026 00:53:42 +0000 (0:00:01.097) 0:08:15.288 ********** 2026-04-17 00:55:38.944255 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:55:38.944259 | orchestrator | 2026-04-17 00:55:38.944262 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-17 00:55:38.944265 | orchestrator | Friday 17 April 2026 00:53:44 +0000 (0:00:01.204) 0:08:16.492 ********** 2026-04-17 00:55:38.944268 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.944271 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.944274 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.944277 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.944281 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.944286 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.944291 | orchestrator | 2026-04-17 00:55:38.944302 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-17 00:55:38.944308 | orchestrator | Friday 17 April 2026 00:53:45 +0000 (0:00:01.534) 0:08:18.027 ********** 2026-04-17 00:55:38.944313 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.944318 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.944323 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.944328 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:55:38.944332 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:55:38.944337 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:55:38.944342 | orchestrator | 2026-04-17 00:55:38.944347 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-17 00:55:38.944352 | orchestrator | Friday 17 April 2026 00:53:49 +0000 (0:00:03.681) 
0:08:21.708 **********
2026-04-17 00:55:38.944358 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 00:55:38.944363 | orchestrator |
2026-04-17 00:55:38.944367 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-17 00:55:38.944372 | orchestrator | Friday 17 April 2026 00:53:50 +0000 (0:00:01.273) 0:08:22.982 **********
2026-04-17 00:55:38.944376 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944381 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944386 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944391 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:55:38.944396 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:55:38.944400 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:55:38.944407 | orchestrator |
2026-04-17 00:55:38.944410 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-17 00:55:38.944413 | orchestrator | Friday 17 April 2026 00:53:51 +0000 (0:00:00.585) 0:08:23.567 **********
2026-04-17 00:55:38.944416 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.944419 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.944422 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.944425 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:55:38.944428 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:55:38.944431 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:55:38.944434 | orchestrator |
2026-04-17 00:55:38.944437 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-17 00:55:38.944445 | orchestrator | Friday 17 April 2026 00:53:53 +0000 (0:00:02.490) 0:08:26.057 **********
2026-04-17 00:55:38.944448 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944451 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944454 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944457 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:55:38.944460 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:55:38.944463 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:55:38.944466 | orchestrator |
2026-04-17 00:55:38.944469 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-17 00:55:38.944472 | orchestrator |
2026-04-17 00:55:38.944475 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 00:55:38.944482 | orchestrator | Friday 17 April 2026 00:53:54 +0000 (0:00:00.736) 0:08:26.794 **********
2026-04-17 00:55:38.944485 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.944489 | orchestrator |
2026-04-17 00:55:38.944494 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 00:55:38.944499 | orchestrator | Friday 17 April 2026 00:53:55 +0000 (0:00:00.608) 0:08:27.403 **********
2026-04-17 00:55:38.944506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.944512 | orchestrator |
2026-04-17 00:55:38.944517 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 00:55:38.944522 | orchestrator | Friday 17 April 2026 00:53:55 +0000 (0:00:00.427) 0:08:27.831 **********
2026-04-17 00:55:38.944527 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944531 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944536 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944541 | orchestrator |
2026-04-17 00:55:38.944546 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 00:55:38.944550 | orchestrator | Friday 17 April 2026 00:53:55 +0000 (0:00:00.399) 0:08:28.230 **********
2026-04-17 00:55:38.944555 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944560 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944564 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944568 | orchestrator |
2026-04-17 00:55:38.944573 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 00:55:38.944578 | orchestrator | Friday 17 April 2026 00:53:56 +0000 (0:00:00.653) 0:08:28.883 **********
2026-04-17 00:55:38.944583 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944588 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944593 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944698 | orchestrator |
2026-04-17 00:55:38.944710 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 00:55:38.944713 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:00.666) 0:08:29.549 **********
2026-04-17 00:55:38.944716 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944719 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944722 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944725 | orchestrator |
2026-04-17 00:55:38.944728 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 00:55:38.944731 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:00.637) 0:08:30.187 **********
2026-04-17 00:55:38.944734 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944737 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944740 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944744 | orchestrator |
2026-04-17 00:55:38.944747 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 00:55:38.944750 | orchestrator | Friday 17 April 2026 00:53:58 +0000 (0:00:00.446) 0:08:30.633 **********
2026-04-17 00:55:38.944753 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944756 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944759 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944762 | orchestrator |
2026-04-17 00:55:38.944768 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 00:55:38.944771 | orchestrator | Friday 17 April 2026 00:53:58 +0000 (0:00:00.262) 0:08:30.896 **********
2026-04-17 00:55:38.944774 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944777 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944780 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944783 | orchestrator |
2026-04-17 00:55:38.944786 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 00:55:38.944789 | orchestrator | Friday 17 April 2026 00:53:58 +0000 (0:00:00.236) 0:08:31.132 **********
2026-04-17 00:55:38.944793 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944800 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944803 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944806 | orchestrator |
2026-04-17 00:55:38.944809 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 00:55:38.944812 | orchestrator | Friday 17 April 2026 00:53:59 +0000 (0:00:00.729) 0:08:31.862 **********
2026-04-17 00:55:38.944815 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944818 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944821 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944824 | orchestrator |
2026-04-17 00:55:38.944827 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 00:55:38.944830 | orchestrator | Friday 17 April 2026 00:54:00 +0000 (0:00:01.031) 0:08:32.894 **********
2026-04-17 00:55:38.944833 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944836 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944840 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944843 | orchestrator |
2026-04-17 00:55:38.944846 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 00:55:38.944849 | orchestrator | Friday 17 April 2026 00:54:00 +0000 (0:00:00.347) 0:08:33.241 **********
2026-04-17 00:55:38.944852 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944855 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944858 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944861 | orchestrator |
2026-04-17 00:55:38.944864 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 00:55:38.944867 | orchestrator | Friday 17 April 2026 00:54:01 +0000 (0:00:00.285) 0:08:33.527 **********
2026-04-17 00:55:38.944870 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944873 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944882 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944885 | orchestrator |
2026-04-17 00:55:38.944888 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 00:55:38.944891 | orchestrator | Friday 17 April 2026 00:54:01 +0000 (0:00:00.340) 0:08:33.867 **********
2026-04-17 00:55:38.944894 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944897 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944900 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944903 | orchestrator |
2026-04-17 00:55:38.944906 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 00:55:38.944909 | orchestrator | Friday 17 April 2026 00:54:02 +0000 (0:00:00.690) 0:08:34.558 **********
2026-04-17 00:55:38.944912 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944915 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944918 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944922 | orchestrator |
2026-04-17 00:55:38.944925 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 00:55:38.944928 | orchestrator | Friday 17 April 2026 00:54:02 +0000 (0:00:00.383) 0:08:34.941 **********
2026-04-17 00:55:38.944931 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944934 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944937 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944940 | orchestrator |
2026-04-17 00:55:38.944943 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 00:55:38.944946 | orchestrator | Friday 17 April 2026 00:54:02 +0000 (0:00:00.286) 0:08:35.228 **********
2026-04-17 00:55:38.944949 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944952 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944955 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944958 | orchestrator |
2026-04-17 00:55:38.944961 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 00:55:38.944964 | orchestrator | Friday 17 April 2026 00:54:03 +0000 (0:00:00.334) 0:08:35.563 **********
2026-04-17 00:55:38.944967 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.944970 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.944973 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.944981 | orchestrator |
2026-04-17 00:55:38.944984 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 00:55:38.944987 | orchestrator | Friday 17 April 2026 00:54:03 +0000 (0:00:00.602) 0:08:36.166 **********
2026-04-17 00:55:38.944990 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.944993 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.944996 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.944999 | orchestrator |
2026-04-17 00:55:38.945002 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 00:55:38.945005 | orchestrator | Friday 17 April 2026 00:54:04 +0000 (0:00:00.307) 0:08:36.474 **********
2026-04-17 00:55:38.945008 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945037 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945040 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945043 | orchestrator |
2026-04-17 00:55:38.945046 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-17 00:55:38.945049 | orchestrator | Friday 17 April 2026 00:54:04 +0000 (0:00:00.525) 0:08:36.999 **********
2026-04-17 00:55:38.945052 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945055 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945058 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-17 00:55:38.945061 | orchestrator |
2026-04-17 00:55:38.945065 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-17 00:55:38.945068 | orchestrator | Friday 17 April 2026 00:54:05 +0000 (0:00:01.698) 0:08:37.676 **********
2026-04-17 00:55:38.945071 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 00:55:38.945074 | orchestrator |
2026-04-17 00:55:38.945077 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-17 00:55:38.945083 | orchestrator | Friday 17 April 2026 00:54:07 +0000 (0:00:01.698) 0:08:39.375 **********
2026-04-17 00:55:38.945086 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-17 00:55:38.945091 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945094 | orchestrator |
2026-04-17 00:55:38.945097 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-17 00:55:38.945100 | orchestrator | Friday 17 April 2026 00:54:07 +0000 (0:00:00.260) 0:08:39.636 **********
2026-04-17 00:55:38.945104 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 00:55:38.945111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-17 00:55:38.945114 | orchestrator |
2026-04-17 00:55:38.945117 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-17 00:55:38.945120 | orchestrator | Friday 17 April 2026 00:54:13 +0000 (0:00:06.168) 0:08:45.804 **********
2026-04-17 00:55:38.945123 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-17 00:55:38.945126 | orchestrator |
2026-04-17 00:55:38.945129 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-17 00:55:38.945132 | orchestrator | Friday 17 April 2026 00:54:16 +0000 (0:00:02.649) 0:08:48.453 **********
2026-04-17 00:55:38.945138 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.945141 | orchestrator |
2026-04-17 00:55:38.945144 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-17 00:55:38.945151 | orchestrator | Friday 17 April 2026 00:54:16 +0000 (0:00:00.617) 0:08:49.071 **********
2026-04-17 00:55:38.945154 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-17 00:55:38.945157 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-17 00:55:38.945160 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-17 00:55:38.945163 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-17 00:55:38.945166 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-17 00:55:38.945169 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-17 00:55:38.945172 | orchestrator |
2026-04-17 00:55:38.945175 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-17 00:55:38.945178 | orchestrator | Friday 17 April 2026 00:54:17 +0000 (0:00:01.217) 0:08:50.289 **********
2026-04-17 00:55:38.945181 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 00:55:38.945184 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 00:55:38.945188 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 00:55:38.945191 | orchestrator |
2026-04-17 00:55:38.945194 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-17 00:55:38.945197 | orchestrator | Friday 17 April 2026 00:54:19 +0000 (0:00:01.888) 0:08:52.177 **********
2026-04-17 00:55:38.945200 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-17 00:55:38.945203 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 00:55:38.945206 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945209 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-17 00:55:38.945212 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-17 00:55:38.945215 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-17 00:55:38.945218 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945221 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-17 00:55:38.945224 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945227 | orchestrator |
2026-04-17 00:55:38.945230 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-17 00:55:38.945233 | orchestrator | Friday 17 April 2026 00:54:21 +0000 (0:00:01.195) 0:08:53.373 **********
2026-04-17 00:55:38.945237 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945240 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945242 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945246 | orchestrator |
2026-04-17 00:55:38.945249 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-17 00:55:38.945252 | orchestrator | Friday 17 April 2026 00:54:23 +0000 (0:00:02.062) 0:08:55.435 **********
2026-04-17 00:55:38.945255 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945258 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945261 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945264 | orchestrator |
2026-04-17 00:55:38.945267 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-17 00:55:38.945270 | orchestrator | Friday 17 April 2026 00:54:23 +0000 (0:00:00.313) 0:08:55.748 **********
2026-04-17 00:55:38.945273 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.945276 | orchestrator |
2026-04-17 00:55:38.945279 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-17 00:55:38.945282 | orchestrator | Friday 17 April 2026 00:54:23 +0000 (0:00:00.469) 0:08:56.218 **********
2026-04-17 00:55:38.945285 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.945288 | orchestrator |
2026-04-17 00:55:38.945291 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-17 00:55:38.945295 | orchestrator | Friday 17 April 2026 00:54:24 +0000 (0:00:00.742) 0:08:56.961 **********
2026-04-17 00:55:38.945300 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945303 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945306 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945309 | orchestrator |
2026-04-17 00:55:38.945312 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-17 00:55:38.945315 | orchestrator | Friday 17 April 2026 00:54:25 +0000 (0:00:01.234) 0:08:58.196 **********
2026-04-17 00:55:38.945318 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945321 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945324 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945327 | orchestrator |
2026-04-17 00:55:38.945330 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-17 00:55:38.945333 | orchestrator | Friday 17 April 2026 00:54:26 +0000 (0:00:01.077) 0:08:59.273 **********
2026-04-17 00:55:38.945336 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945339 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945342 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945345 | orchestrator |
2026-04-17 00:55:38.945348 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-17 00:55:38.945351 | orchestrator | Friday 17 April 2026 00:54:28 +0000 (0:00:01.649) 0:09:00.923 **********
2026-04-17 00:55:38.945354 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945357 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945360 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945363 | orchestrator |
2026-04-17 00:55:38.945367 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-17 00:55:38.945370 | orchestrator | Friday 17 April 2026 00:54:30 +0000 (0:00:02.346) 0:09:03.270 **********
2026-04-17 00:55:38.945373 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945376 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945379 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945382 | orchestrator |
2026-04-17 00:55:38.945387 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 00:55:38.945390 | orchestrator | Friday 17 April 2026 00:54:32 +0000 (0:00:01.188) 0:09:04.459 **********
2026-04-17 00:55:38.945393 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945424 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945429 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945432 | orchestrator |
2026-04-17 00:55:38.945435 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-17 00:55:38.945438 | orchestrator | Friday 17 April 2026 00:54:33 +0000 (0:00:01.056) 0:09:05.515 **********
2026-04-17 00:55:38.945441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.945444 | orchestrator |
2026-04-17 00:55:38.945447 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-17 00:55:38.945450 | orchestrator | Friday 17 April 2026 00:54:33 +0000 (0:00:00.576) 0:09:06.092 **********
2026-04-17 00:55:38.945453 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945456 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945459 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945462 | orchestrator |
2026-04-17 00:55:38.945465 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-17 00:55:38.945468 | orchestrator | Friday 17 April 2026 00:54:34 +0000 (0:00:00.302) 0:09:06.395 **********
2026-04-17 00:55:38.945471 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.945475 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.945478 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.945481 | orchestrator |
2026-04-17 00:55:38.945484 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-17 00:55:38.945487 | orchestrator | Friday 17 April 2026 00:54:35 +0000 (0:00:01.479) 0:09:07.874 **********
2026-04-17 00:55:38.945490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 00:55:38.945495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 00:55:38.945498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 00:55:38.945501 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945504 | orchestrator |
2026-04-17 00:55:38.945507 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-17 00:55:38.945510 | orchestrator | Friday 17 April 2026 00:54:36 +0000 (0:00:00.754) 0:09:08.629 **********
2026-04-17 00:55:38.945513 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945516 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945519 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945523 | orchestrator |
2026-04-17 00:55:38.945526 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-17 00:55:38.945529 | orchestrator |
2026-04-17 00:55:38.945532 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-17 00:55:38.945535 | orchestrator | Friday 17 April 2026 00:54:36 +0000 (0:00:00.704) 0:09:09.333 **********
2026-04-17 00:55:38.945538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.945541 | orchestrator |
2026-04-17 00:55:38.945544 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-17 00:55:38.945547 | orchestrator | Friday 17 April 2026 00:54:37 +0000 (0:00:01.006) 0:09:10.339 **********
2026-04-17 00:55:38.945550 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.945553 | orchestrator |
2026-04-17 00:55:38.945556 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-17 00:55:38.945561 | orchestrator | Friday 17 April 2026 00:54:38 +0000 (0:00:00.624) 0:09:10.963 **********
2026-04-17 00:55:38.945564 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945567 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945577 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945586 | orchestrator |
2026-04-17 00:55:38.945591 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-17 00:55:38.945613 | orchestrator | Friday 17 April 2026 00:54:39 +0000 (0:00:00.633) 0:09:11.597 **********
2026-04-17 00:55:38.945619 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945623 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945628 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945633 | orchestrator |
2026-04-17 00:55:38.945640 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-17 00:55:38.945645 | orchestrator | Friday 17 April 2026 00:54:40 +0000 (0:00:00.754) 0:09:12.351 **********
2026-04-17 00:55:38.945649 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945653 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945658 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945662 | orchestrator |
2026-04-17 00:55:38.945667 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-17 00:55:38.945671 | orchestrator | Friday 17 April 2026 00:54:40 +0000 (0:00:00.708) 0:09:13.060 **********
2026-04-17 00:55:38.945676 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945680 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945684 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945689 | orchestrator |
2026-04-17 00:55:38.945694 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-17 00:55:38.945698 | orchestrator | Friday 17 April 2026 00:54:41 +0000 (0:00:00.832) 0:09:13.892 **********
2026-04-17 00:55:38.945703 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945708 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945713 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945718 | orchestrator |
2026-04-17 00:55:38.945724 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-17 00:55:38.945727 | orchestrator | Friday 17 April 2026 00:54:42 +0000 (0:00:00.737) 0:09:14.629 **********
2026-04-17 00:55:38.945733 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945736 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945739 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945742 | orchestrator |
2026-04-17 00:55:38.945745 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-17 00:55:38.945752 | orchestrator | Friday 17 April 2026 00:54:42 +0000 (0:00:00.304) 0:09:14.934 **********
2026-04-17 00:55:38.945755 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945758 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945761 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945764 | orchestrator |
2026-04-17 00:55:38.945768 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-17 00:55:38.945771 | orchestrator | Friday 17 April 2026 00:54:42 +0000 (0:00:00.306) 0:09:15.240 **********
2026-04-17 00:55:38.945774 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945777 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945780 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945783 | orchestrator |
2026-04-17 00:55:38.945786 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-17 00:55:38.945789 | orchestrator | Friday 17 April 2026 00:54:43 +0000 (0:00:00.707) 0:09:15.948 **********
2026-04-17 00:55:38.945792 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945795 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945798 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945801 | orchestrator |
2026-04-17 00:55:38.945804 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-17 00:55:38.945807 | orchestrator | Friday 17 April 2026 00:54:44 +0000 (0:00:00.992) 0:09:16.940 **********
2026-04-17 00:55:38.945810 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945813 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945816 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945820 | orchestrator |
2026-04-17 00:55:38.945823 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-17 00:55:38.945826 | orchestrator | Friday 17 April 2026 00:54:44 +0000 (0:00:00.322) 0:09:17.263 **********
2026-04-17 00:55:38.945829 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945832 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945835 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945838 | orchestrator |
2026-04-17 00:55:38.945841 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-17 00:55:38.945844 | orchestrator | Friday 17 April 2026 00:54:45 +0000 (0:00:00.294) 0:09:17.557 **********
2026-04-17 00:55:38.945847 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945850 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945853 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945856 | orchestrator |
2026-04-17 00:55:38.945859 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-17 00:55:38.945862 | orchestrator | Friday 17 April 2026 00:54:45 +0000 (0:00:00.323) 0:09:17.881 **********
2026-04-17 00:55:38.945866 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945869 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945872 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945875 | orchestrator |
2026-04-17 00:55:38.945878 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-17 00:55:38.945881 | orchestrator | Friday 17 April 2026 00:54:46 +0000 (0:00:00.570) 0:09:18.452 **********
2026-04-17 00:55:38.945884 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945887 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945890 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945893 | orchestrator |
2026-04-17 00:55:38.945896 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-17 00:55:38.945899 | orchestrator | Friday 17 April 2026 00:54:46 +0000 (0:00:00.330) 0:09:18.782 **********
2026-04-17 00:55:38.945902 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945905 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945910 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945913 | orchestrator |
2026-04-17 00:55:38.945916 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-17 00:55:38.945920 | orchestrator | Friday 17 April 2026 00:54:46 +0000 (0:00:00.300) 0:09:19.083 **********
2026-04-17 00:55:38.945923 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945928 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945931 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945934 | orchestrator |
2026-04-17 00:55:38.945938 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-17 00:55:38.945941 | orchestrator | Friday 17 April 2026 00:54:47 +0000 (0:00:00.290) 0:09:19.373 **********
2026-04-17 00:55:38.945944 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.945947 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.945950 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.945953 | orchestrator |
2026-04-17 00:55:38.945956 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-17 00:55:38.945959 | orchestrator | Friday 17 April 2026 00:54:47 +0000 (0:00:00.533) 0:09:19.907 **********
2026-04-17 00:55:38.945962 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945965 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945968 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945971 | orchestrator |
2026-04-17 00:55:38.945974 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-17 00:55:38.945977 | orchestrator | Friday 17 April 2026 00:54:47 +0000 (0:00:00.322) 0:09:20.230 **********
2026-04-17 00:55:38.945980 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.945984 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.945987 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.945990 | orchestrator |
2026-04-17 00:55:38.945993 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-17 00:55:38.945996 | orchestrator | Friday 17 April 2026 00:54:48 +0000 (0:00:00.547) 0:09:20.777 **********
2026-04-17 00:55:38.945999 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.946002 | orchestrator |
2026-04-17 00:55:38.946005 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-17 00:55:38.946008 | orchestrator | Friday 17 April 2026 00:54:49 +0000 (0:00:00.732) 0:09:21.510 **********
2026-04-17 00:55:38.946011 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 00:55:38.946039 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 00:55:38.946042 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 00:55:38.946045 | orchestrator |
2026-04-17 00:55:38.946051 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-17 00:55:38.946054 | orchestrator | Friday 17 April 2026 00:54:50 +0000 (0:00:01.749) 0:09:23.260 **********
2026-04-17 00:55:38.946057 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-17 00:55:38.946060 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-17 00:55:38.946064 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.946067 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-17 00:55:38.946070 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-17 00:55:38.946073 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.946076 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-17 00:55:38.946079 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-17 00:55:38.946082 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.946085 | orchestrator |
2026-04-17 00:55:38.946088 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-17 00:55:38.946091 | orchestrator | Friday 17 April 2026 00:54:52 +0000 (0:00:01.298) 0:09:24.558 **********
2026-04-17 00:55:38.946094 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.946097 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.946103 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.946106 | orchestrator |
2026-04-17 00:55:38.946109 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-17 00:55:38.946112 | orchestrator | Friday 17 April 2026 00:54:52 +0000 (0:00:00.341) 0:09:24.900 **********
2026-04-17 00:55:38.946115 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.946118 | orchestrator |
2026-04-17 00:55:38.946121 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-17 00:55:38.946124 | orchestrator | Friday 17 April 2026 00:54:53 +0000 (0:00:00.774) 0:09:25.675 **********
2026-04-17 00:55:38.946127 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 00:55:38.946131 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-17 00:55:38.946134 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 00:55:38.946137 | orchestrator |
2026-04-17 00:55:38.946140 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-17 00:55:38.946144 | orchestrator | Friday 17 April 2026 00:54:54 +0000 (0:00:00.819) 0:09:26.494 **********
2026-04-17 00:55:38.946147 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 00:55:38.946150 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-17 00:55:38.946153 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 00:55:38.946156 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-17 00:55:38.946159 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 00:55:38.946164 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-17 00:55:38.946167 | orchestrator |
2026-04-17 00:55:38.946170 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-17 00:55:38.946173 | orchestrator | Friday 17 April 2026 00:54:57 +0000 (0:00:03.653) 0:09:30.148 **********
2026-04-17 00:55:38.946176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-17 00:55:38.946179 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-17 00:55:38.946182 | orchestrator |
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.946185 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 00:55:38.946188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:55:38.946191 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 00:55:38.946194 | orchestrator | 2026-04-17 00:55:38.946197 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-17 00:55:38.946201 | orchestrator | Friday 17 April 2026 00:54:59 +0000 (0:00:02.014) 0:09:32.162 ********** 2026-04-17 00:55:38.946204 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 00:55:38.946207 | orchestrator | changed: [testbed-node-3] 2026-04-17 00:55:38.946210 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 00:55:38.946213 | orchestrator | changed: [testbed-node-4] 2026-04-17 00:55:38.946216 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 00:55:38.946219 | orchestrator | changed: [testbed-node-5] 2026-04-17 00:55:38.946222 | orchestrator | 2026-04-17 00:55:38.946225 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-17 00:55:38.946230 | orchestrator | Friday 17 April 2026 00:55:01 +0000 (0:00:01.316) 0:09:33.479 ********** 2026-04-17 00:55:38.946233 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-17 00:55:38.946236 | orchestrator | 2026-04-17 00:55:38.946239 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-17 00:55:38.946243 | orchestrator | Friday 17 April 2026 00:55:01 +0000 (0:00:00.243) 0:09:33.722 ********** 2026-04-17 00:55:38.946247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-17 00:55:38.946251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946263 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.946266 | orchestrator | 2026-04-17 00:55:38.946270 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-17 00:55:38.946273 | orchestrator | Friday 17 April 2026 00:55:01 +0000 (0:00:00.604) 0:09:34.327 ********** 2026-04-17 00:55:38.946276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-17 00:55:38.946291 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
00:55:38.946294 | orchestrator | 2026-04-17 00:55:38.946297 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-17 00:55:38.946301 | orchestrator | Friday 17 April 2026 00:55:02 +0000 (0:00:00.592) 0:09:34.920 ********** 2026-04-17 00:55:38.946304 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 00:55:38.946307 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 00:55:38.946310 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 00:55:38.946313 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 00:55:38.946318 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-17 00:55:38.946321 | orchestrator | 2026-04-17 00:55:38.946324 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-17 00:55:38.946327 | orchestrator | Friday 17 April 2026 00:55:24 +0000 (0:00:21.734) 0:09:56.654 ********** 2026-04-17 00:55:38.946330 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:55:38.946335 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:55:38.946338 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:55:38.946341 | orchestrator | 2026-04-17 00:55:38.946344 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-17 00:55:38.946347 | orchestrator | 
Friday 17 April 2026 00:55:24 +0000 (0:00:00.350) 0:09:57.004 **********
2026-04-17 00:55:38.946350 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.946354 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.946357 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.946360 | orchestrator |
2026-04-17 00:55:38.946363 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-17 00:55:38.946366 | orchestrator | Friday 17 April 2026 00:55:25 +0000 (0:00:00.588) 0:09:57.593 **********
2026-04-17 00:55:38.946369 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.946372 | orchestrator |
2026-04-17 00:55:38.946375 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-17 00:55:38.946378 | orchestrator | Friday 17 April 2026 00:55:25 +0000 (0:00:00.530) 0:09:58.123 **********
2026-04-17 00:55:38.946381 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.946384 | orchestrator |
2026-04-17 00:55:38.946387 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-17 00:55:38.946390 | orchestrator | Friday 17 April 2026 00:55:26 +0000 (0:00:00.756) 0:09:58.880 **********
2026-04-17 00:55:38.946393 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.946397 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.946400 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.946403 | orchestrator |
2026-04-17 00:55:38.946406 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-17 00:55:38.946410 | orchestrator | Friday 17 April 2026 00:55:27 +0000 (0:00:01.370) 0:10:00.251 **********
2026-04-17 00:55:38.946414 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.946417 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.946420 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.946423 | orchestrator |
2026-04-17 00:55:38.946426 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-17 00:55:38.946429 | orchestrator | Friday 17 April 2026 00:55:29 +0000 (0:00:01.123) 0:10:01.375 **********
2026-04-17 00:55:38.946432 | orchestrator | changed: [testbed-node-3]
2026-04-17 00:55:38.946439 | orchestrator | changed: [testbed-node-5]
2026-04-17 00:55:38.946442 | orchestrator | changed: [testbed-node-4]
2026-04-17 00:55:38.946445 | orchestrator |
2026-04-17 00:55:38.946448 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-17 00:55:38.946451 | orchestrator | Friday 17 April 2026 00:55:30 +0000 (0:00:01.938) 0:10:03.313 **********
2026-04-17 00:55:38.946454 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-17 00:55:38.946458 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-17 00:55:38.946461 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-17 00:55:38.946464 | orchestrator |
2026-04-17 00:55:38.946467 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-17 00:55:38.946470 | orchestrator | Friday 17 April 2026 00:55:33 +0000 (0:00:02.789) 0:10:06.103 **********
2026-04-17 00:55:38.946473 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.946476 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.946479 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.946482 | orchestrator |
2026-04-17 00:55:38.946486 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-17 00:55:38.946491 | orchestrator | Friday 17 April 2026 00:55:34 +0000 (0:00:00.315) 0:10:06.419 **********
2026-04-17 00:55:38.946494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 00:55:38.946497 | orchestrator |
2026-04-17 00:55:38.946500 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-17 00:55:38.946503 | orchestrator | Friday 17 April 2026 00:55:34 +0000 (0:00:00.836) 0:10:07.256 **********
2026-04-17 00:55:38.946506 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.946509 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.946512 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.946515 | orchestrator |
2026-04-17 00:55:38.946519 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-17 00:55:38.946522 | orchestrator | Friday 17 April 2026 00:55:35 +0000 (0:00:00.313) 0:10:07.569 **********
2026-04-17 00:55:38.946525 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.946528 | orchestrator | skipping: [testbed-node-4]
2026-04-17 00:55:38.946531 | orchestrator | skipping: [testbed-node-5]
2026-04-17 00:55:38.946534 | orchestrator |
2026-04-17 00:55:38.946537 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-17 00:55:38.946540 | orchestrator | Friday 17 April 2026 00:55:35 +0000 (0:00:00.308) 0:10:07.878 **********
2026-04-17 00:55:38.946543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-17 00:55:38.946546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-17 00:55:38.946550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-17 00:55:38.946553 | orchestrator | skipping: [testbed-node-3]
2026-04-17 00:55:38.946556 | orchestrator |
2026-04-17 00:55:38.946561 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-17 00:55:38.946564 | orchestrator | Friday 17 April 2026 00:55:36 +0000 (0:00:01.260) 0:10:09.138 **********
2026-04-17 00:55:38.946567 | orchestrator | ok: [testbed-node-3]
2026-04-17 00:55:38.946570 | orchestrator | ok: [testbed-node-4]
2026-04-17 00:55:38.946573 | orchestrator | ok: [testbed-node-5]
2026-04-17 00:55:38.946576 | orchestrator |
2026-04-17 00:55:38.946579 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:55:38.946582 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-17 00:55:38.946586 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-17 00:55:38.946589 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-17 00:55:38.946592 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-17 00:55:38.946595 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-17 00:55:38.946609 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-17 00:55:38.946614 | orchestrator |
2026-04-17 00:55:38.946619 | orchestrator |
2026-04-17 00:55:38.946625 | orchestrator |
2026-04-17 00:55:38.946629 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:55:38.946633 | orchestrator | Friday 17 April 2026 00:55:37 +0000 (0:00:00.252) 0:10:09.390 **********
2026-04-17 00:55:38.946636 | orchestrator | ===============================================================================
2026-04-17 00:55:38.946641 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 63.19s
2026-04-17 00:55:38.946644 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 37.67s
2026-04-17 00:55:38.946649 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 21.73s
2026-04-17 00:55:38.946652 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.37s
2026-04-17 00:55:38.946655 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.54s
2026-04-17 00:55:38.946658 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.77s
2026-04-17 00:55:38.946661 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.19s
2026-04-17 00:55:38.946664 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.78s
2026-04-17 00:55:38.946668 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.99s
2026-04-17 00:55:38.946671 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.17s
2026-04-17 00:55:38.946674 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.16s
2026-04-17 00:55:38.946677 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.06s
2026-04-17 00:55:38.946680 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.52s
2026-04-17 00:55:38.946683 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.17s
2026-04-17 00:55:38.946686 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.87s
2026-04-17 00:55:38.946689 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.87s
2026-04-17 00:55:38.946692 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.68s
2026-04-17 00:55:38.946695 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.68s
2026-04-17 00:55:38.946698 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.65s
2026-04-17 00:55:38.946701 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.50s
2026-04-17 00:55:38.946704 | orchestrator | 2026-04-17 00:55:38 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:38.946708 | orchestrator | 2026-04-17 00:55:38 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:38.946711 | orchestrator | 2026-04-17 00:55:38 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:38.946714 | orchestrator | 2026-04-17 00:55:38 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:55:41.991919 | orchestrator | 2026-04-17 00:55:41 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:41.994796 | orchestrator | 2026-04-17 00:55:41 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:41.996738 | orchestrator | 2026-04-17 00:55:41 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:41.997039 | orchestrator | 2026-04-17 00:55:41 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:55:45.056113 | orchestrator | 2026-04-17 00:55:45 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:45.058948 | orchestrator | 2026-04-17 00:55:45 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:45.061678 | orchestrator | 2026-04-17 00:55:45 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:45.062092 | orchestrator | 2026-04-17 00:55:45 |
INFO  | Wait 1 second(s) until the next check
2026-04-17 00:55:48.119680 | orchestrator | 2026-04-17 00:55:48 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:48.121735 | orchestrator | 2026-04-17 00:55:48 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:48.123863 | orchestrator | 2026-04-17 00:55:48 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:48.123980 | orchestrator | 2026-04-17 00:55:48 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:55:51.174289 | orchestrator | 2026-04-17 00:55:51 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:51.175787 | orchestrator | 2026-04-17 00:55:51 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:51.178332 | orchestrator | 2026-04-17 00:55:51 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:51.178396 | orchestrator | 2026-04-17 00:55:51 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:55:54.227332 | orchestrator | 2026-04-17 00:55:54 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:54.232031 | orchestrator | 2026-04-17 00:55:54 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:54.234193 | orchestrator | 2026-04-17 00:55:54 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:54.234250 | orchestrator | 2026-04-17 00:55:54 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:55:57.288944 | orchestrator | 2026-04-17 00:55:57 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:55:57.290864 | orchestrator | 2026-04-17 00:55:57 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:55:57.292955 | orchestrator | 2026-04-17 00:55:57 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:55:57.293009 | orchestrator | 2026-04-17 00:55:57 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:00.343772 | orchestrator | 2026-04-17 00:56:00 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:00.345192 | orchestrator | 2026-04-17 00:56:00 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:00.347154 | orchestrator | 2026-04-17 00:56:00 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:00.347223 | orchestrator | 2026-04-17 00:56:00 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:03.387771 | orchestrator | 2026-04-17 00:56:03 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:03.389538 | orchestrator | 2026-04-17 00:56:03 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:03.392568 | orchestrator | 2026-04-17 00:56:03 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:03.392609 | orchestrator | 2026-04-17 00:56:03 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:06.437470 | orchestrator | 2026-04-17 00:56:06 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:06.438392 | orchestrator | 2026-04-17 00:56:06 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:06.439648 | orchestrator | 2026-04-17 00:56:06 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:06.439881 | orchestrator | 2026-04-17 00:56:06 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:09.489375 | orchestrator | 2026-04-17 00:56:09 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:09.491720 | orchestrator | 2026-04-17 00:56:09 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:09.494342 | orchestrator | 2026-04-17 00:56:09 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:09.494612 | orchestrator | 2026-04-17 00:56:09 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:12.544348 | orchestrator | 2026-04-17 00:56:12 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:12.544844 | orchestrator | 2026-04-17 00:56:12 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:12.545945 | orchestrator | 2026-04-17 00:56:12 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:12.545979 | orchestrator | 2026-04-17 00:56:12 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:15.603768 | orchestrator | 2026-04-17 00:56:15 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:15.605711 | orchestrator | 2026-04-17 00:56:15 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:15.608284 | orchestrator | 2026-04-17 00:56:15 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:15.608430 | orchestrator | 2026-04-17 00:56:15 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:18.658372 | orchestrator | 2026-04-17 00:56:18 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:18.660735 | orchestrator | 2026-04-17 00:56:18 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:18.662449 | orchestrator | 2026-04-17 00:56:18 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:18.662516 | orchestrator | 2026-04-17 00:56:18 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:21.702448 | orchestrator | 2026-04-17 00:56:21 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:21.703288 | orchestrator | 2026-04-17 00:56:21 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:21.704230 | orchestrator | 2026-04-17 00:56:21 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:21.704407 | orchestrator | 2026-04-17 00:56:21 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:24.747743 | orchestrator | 2026-04-17 00:56:24 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:24.749560 | orchestrator | 2026-04-17 00:56:24 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:24.751436 | orchestrator | 2026-04-17 00:56:24 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:24.751500 | orchestrator | 2026-04-17 00:56:24 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:27.794565 | orchestrator | 2026-04-17 00:56:27 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:27.797958 | orchestrator | 2026-04-17 00:56:27 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:27.800305 | orchestrator | 2026-04-17 00:56:27 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:27.800360 | orchestrator | 2026-04-17 00:56:27 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:30.843481 | orchestrator | 2026-04-17 00:56:30 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:30.845185 | orchestrator | 2026-04-17 00:56:30 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:30.847368 | orchestrator | 2026-04-17 00:56:30 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:30.847439 | orchestrator | 2026-04-17 00:56:30 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:33.889486 | orchestrator | 2026-04-17 00:56:33 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:33.891350 | orchestrator | 2026-04-17 00:56:33 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:33.892809 | orchestrator | 2026-04-17 00:56:33 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:33.892838 | orchestrator | 2026-04-17 00:56:33 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:36.937820 | orchestrator | 2026-04-17 00:56:36 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:36.939914 | orchestrator | 2026-04-17 00:56:36 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:36.943253 | orchestrator | 2026-04-17 00:56:36 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:36.943321 | orchestrator | 2026-04-17 00:56:36 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:39.988393 | orchestrator | 2026-04-17 00:56:39 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:39.990197 | orchestrator | 2026-04-17 00:56:39 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:39.992613 | orchestrator | 2026-04-17 00:56:39 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state STARTED
2026-04-17 00:56:39.992679 | orchestrator | 2026-04-17 00:56:39 | INFO  | Wait 1 second(s) until the next check
2026-04-17 00:56:43.032829 | orchestrator | 2026-04-17 00:56:43 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED
2026-04-17 00:56:43.036308 | orchestrator | 2026-04-17 00:56:43 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state STARTED
2026-04-17 00:56:43.036388 | orchestrator | 2026-04-17 00:56:43 | INFO  | Task 17f4ea40-8eae-4b81-935f-ec23e6590f64 is in state SUCCESS
2026-04-17 00:56:43.037326 | orchestrator |
2026-04-17 00:56:43.037389 | orchestrator |
2026-04-17 00:56:43.037398 | orchestrator | PLAY [Group hosts based on
configuration] **************************************
2026-04-17 00:56:43.037406 | orchestrator |
2026-04-17 00:56:43.037413 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 00:56:43.037420 | orchestrator | Friday 17 April 2026 00:53:54 +0000 (0:00:00.294) 0:00:00.294 **********
2026-04-17 00:56:43.037426 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:56:43.037433 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:56:43.037439 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:56:43.037444 | orchestrator |
2026-04-17 00:56:43.037450 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 00:56:43.037456 | orchestrator | Friday 17 April 2026 00:53:55 +0000 (0:00:00.258) 0:00:00.553 **********
2026-04-17 00:56:43.037463 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-17 00:56:43.037469 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-17 00:56:43.037475 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-17 00:56:43.037481 | orchestrator |
2026-04-17 00:56:43.037488 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-17 00:56:43.037529 | orchestrator |
2026-04-17 00:56:43.037536 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-17 00:56:43.037542 | orchestrator | Friday 17 April 2026 00:53:55 +0000 (0:00:00.249) 0:00:00.802 **********
2026-04-17 00:56:43.037549 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 00:56:43.037555 | orchestrator |
2026-04-17 00:56:43.037562 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-17 00:56:43.037569 | orchestrator | Friday 17 April 2026 00:53:55 +0000 (0:00:00.510) 0:00:01.312 **********
2026-04-17 00:56:43.037576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-17 00:56:43.037605 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-17 00:56:43.037612 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-17 00:56:43.037618 | orchestrator |
2026-04-17 00:56:43.037624 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-17 00:56:43.037630 | orchestrator | Friday 17 April 2026 00:53:56 +0000 (0:00:00.939) 0:00:02.252 **********
2026-04-17 00:56:43.037774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 00:56:43.037803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 00:56:43.037823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-17 00:56:43.037831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'},
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.037847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.037854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.037861 | orchestrator | 2026-04-17 00:56:43.037868 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 00:56:43.037878 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:01.239) 0:00:03.491 ********** 2026-04-17 00:56:43.037885 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:56:43.037892 | orchestrator | 2026-04-17 00:56:43.037899 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-17 00:56:43.037906 | orchestrator | Friday 17 April 2026 00:53:58 +0000 (0:00:00.485) 0:00:03.977 ********** 2026-04-17 00:56:43.037922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.037929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.037941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.037949 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.037963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.037971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.037982 | orchestrator | 2026-04-17 00:56:43.037989 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-17 00:56:43.037996 | orchestrator | Friday 17 April 2026 00:54:01 +0000 (0:00:02.969) 0:00:06.947 ********** 2026-04-17 00:56:43.038003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:56:43.038009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:56:43.038075 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:43.038086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:56:43.038099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:56:43.038111 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:43.038118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:56:43.038125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:56:43.038131 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:43.038139 | orchestrator | 2026-04-17 00:56:43.038146 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-17 00:56:43.038152 | orchestrator | Friday 17 April 2026 00:54:02 +0000 (0:00:00.737) 0:00:07.684 ********** 2026-04-17 00:56:43.038161 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:56:43.038174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:56:43.038186 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:43.038192 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:56:43.038199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:56:43.038206 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
00:56:43.038216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-17 00:56:43.038228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-17 00:56:43.038240 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 00:56:43.038248 | orchestrator | 2026-04-17 00:56:43.038255 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-17 00:56:43.038262 | orchestrator | Friday 17 April 2026 00:54:02 +0000 (0:00:00.784) 0:00:08.468 ********** 2026-04-17 00:56:43.038270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.038276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-04-17 00:56:43.038295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.038315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 
00:56:43.038332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.038340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.038346 | orchestrator | 2026-04-17 00:56:43.038354 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-17 00:56:43.038360 | orchestrator | Friday 17 April 2026 00:54:05 +0000 (0:00:02.946) 0:00:11.415 ********** 2026-04-17 00:56:43.038368 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:43.038375 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:43.038382 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:43.038388 | orchestrator | 2026-04-17 00:56:43.038394 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-17 00:56:43.038401 | orchestrator | Friday 17 April 2026 00:54:08 +0000 (0:00:02.935) 0:00:14.351 ********** 2026-04-17 00:56:43.038407 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:43.038413 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:43.038420 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:43.038427 | orchestrator | 2026-04-17 00:56:43.038434 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-17 00:56:43.038441 | orchestrator | Friday 17 April 2026 00:54:10 +0000 (0:00:01.567) 0:00:15.919 ********** 2026-04-17 00:56:43.038451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.038467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.038476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-17 00:56:43.038484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.038515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.038536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-17 00:56:43.038544 | orchestrator | 2026-04-17 00:56:43.038551 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 00:56:43.038558 | orchestrator | Friday 17 April 2026 00:54:12 +0000 (0:00:01.986) 0:00:17.905 ********** 2026-04-17 00:56:43.038567 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:43.038574 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:43.038582 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:43.038590 | orchestrator | 2026-04-17 00:56:43.038596 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 
2026-04-17 00:56:43.038603 | orchestrator | Friday 17 April 2026 00:54:12 +0000 (0:00:00.380) 0:00:18.286 ********** 2026-04-17 00:56:43.038610 | orchestrator | 2026-04-17 00:56:43.038617 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-17 00:56:43.038624 | orchestrator | Friday 17 April 2026 00:54:12 +0000 (0:00:00.059) 0:00:18.345 ********** 2026-04-17 00:56:43.038631 | orchestrator | 2026-04-17 00:56:43.038639 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-17 00:56:43.038647 | orchestrator | Friday 17 April 2026 00:54:12 +0000 (0:00:00.080) 0:00:18.425 ********** 2026-04-17 00:56:43.038654 | orchestrator | 2026-04-17 00:56:43.038661 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-17 00:56:43.038669 | orchestrator | Friday 17 April 2026 00:54:12 +0000 (0:00:00.059) 0:00:18.485 ********** 2026-04-17 00:56:43.038677 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:43.038684 | orchestrator | 2026-04-17 00:56:43.038691 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-17 00:56:43.038699 | orchestrator | Friday 17 April 2026 00:54:13 +0000 (0:00:00.211) 0:00:18.696 ********** 2026-04-17 00:56:43.038707 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:43.038715 | orchestrator | 2026-04-17 00:56:43.038722 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-17 00:56:43.038730 | orchestrator | Friday 17 April 2026 00:54:13 +0000 (0:00:00.197) 0:00:18.894 ********** 2026-04-17 00:56:43.038737 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:43.038744 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:43.038751 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:43.038758 | orchestrator | 2026-04-17 00:56:43.038766 | orchestrator | RUNNING 
HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-17 00:56:43.038779 | orchestrator | Friday 17 April 2026 00:55:19 +0000 (0:01:06.539) 0:01:25.434 ********** 2026-04-17 00:56:43.038786 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:43.038795 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:43.038804 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:43.038812 | orchestrator | 2026-04-17 00:56:43.038820 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-17 00:56:43.038828 | orchestrator | Friday 17 April 2026 00:56:25 +0000 (0:01:05.992) 0:02:31.427 ********** 2026-04-17 00:56:43.038836 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:56:43.038844 | orchestrator | 2026-04-17 00:56:43.038852 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-17 00:56:43.038860 | orchestrator | Friday 17 April 2026 00:56:26 +0000 (0:00:00.664) 0:02:32.091 ********** 2026-04-17 00:56:43.038869 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:43.038879 | orchestrator | 2026-04-17 00:56:43.038888 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-17 00:56:43.038896 | orchestrator | Friday 17 April 2026 00:56:29 +0000 (0:00:02.890) 0:02:34.982 ********** 2026-04-17 00:56:43.038904 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:43.038911 | orchestrator | 2026-04-17 00:56:43.038920 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-17 00:56:43.038928 | orchestrator | Friday 17 April 2026 00:56:31 +0000 (0:00:02.523) 0:02:37.506 ********** 2026-04-17 00:56:43.038936 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:43.038943 | orchestrator | 2026-04-17 00:56:43.038955 | orchestrator | TASK [opensearch : 
Create new log retention policy] **************************** 2026-04-17 00:56:43.038963 | orchestrator | Friday 17 April 2026 00:56:34 +0000 (0:00:02.389) 0:02:39.895 ********** 2026-04-17 00:56:43.038971 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:43.038980 | orchestrator | 2026-04-17 00:56:43.038988 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-17 00:56:43.038996 | orchestrator | Friday 17 April 2026 00:56:37 +0000 (0:00:02.840) 0:02:42.735 ********** 2026-04-17 00:56:43.039004 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:43.039011 | orchestrator | 2026-04-17 00:56:43.039019 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:56:43.039028 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 00:56:43.039038 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:56:43.039051 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-17 00:56:43.039059 | orchestrator | 2026-04-17 00:56:43.039067 | orchestrator | 2026-04-17 00:56:43.039075 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:56:43.039083 | orchestrator | Friday 17 April 2026 00:56:40 +0000 (0:00:03.080) 0:02:45.816 ********** 2026-04-17 00:56:43.039091 | orchestrator | =============================================================================== 2026-04-17 00:56:43.039100 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.54s 2026-04-17 00:56:43.039108 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 65.99s 2026-04-17 00:56:43.039116 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.08s 
2026-04-17 00:56:43.039124 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.97s 2026-04-17 00:56:43.039132 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.94s 2026-04-17 00:56:43.039140 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.94s 2026-04-17 00:56:43.039147 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.89s 2026-04-17 00:56:43.039167 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.84s 2026-04-17 00:56:43.039175 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.52s 2026-04-17 00:56:43.039183 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.39s 2026-04-17 00:56:43.039191 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.99s 2026-04-17 00:56:43.039199 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.57s 2026-04-17 00:56:43.039207 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.24s 2026-04-17 00:56:43.039215 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.94s 2026-04-17 00:56:43.039223 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.79s 2026-04-17 00:56:43.039230 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.74s 2026-04-17 00:56:43.039238 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s 2026-04-17 00:56:43.039245 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-04-17 00:56:43.039253 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 
2026-04-17 00:56:43.039260 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.38s 2026-04-17 00:56:43.039267 | orchestrator | 2026-04-17 00:56:43 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:56:46.080152 | orchestrator | 2026-04-17 00:56:46 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED 2026-04-17 00:56:46.089191 | orchestrator | 2026-04-17 00:56:46 | INFO  | Task 42b7fa7a-23ab-47f5-baf9-e817ad0ccf9d is in state SUCCESS 2026-04-17 00:56:46.089259 | orchestrator | 2026-04-17 00:56:46 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:56:46.090478 | orchestrator | 2026-04-17 00:56:46.090564 | orchestrator | 2026-04-17 00:56:46.090585 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-17 00:56:46.090616 | orchestrator | 2026-04-17 00:56:46.090621 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-17 00:56:46.090626 | orchestrator | Friday 17 April 2026 00:53:54 +0000 (0:00:00.102) 0:00:00.102 ********** 2026-04-17 00:56:46.090630 | orchestrator | ok: [localhost] => { 2026-04-17 00:56:46.090635 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-17 00:56:46.090640 | orchestrator | } 2026-04-17 00:56:46.090644 | orchestrator | 2026-04-17 00:56:46.090648 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-17 00:56:46.090652 | orchestrator | Friday 17 April 2026 00:53:54 +0000 (0:00:00.045) 0:00:00.147 ********** 2026-04-17 00:56:46.090656 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-17 00:56:46.090662 | orchestrator | ...ignoring 2026-04-17 00:56:46.090666 | orchestrator | 2026-04-17 00:56:46.090669 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-17 00:56:46.090673 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:02.771) 0:00:02.919 ********** 2026-04-17 00:56:46.090689 | orchestrator | skipping: [localhost] 2026-04-17 00:56:46.090693 | orchestrator | 2026-04-17 00:56:46.090697 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-17 00:56:46.090701 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:00.037) 0:00:02.956 ********** 2026-04-17 00:56:46.090705 | orchestrator | ok: [localhost] 2026-04-17 00:56:46.090709 | orchestrator | 2026-04-17 00:56:46.090713 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 00:56:46.090716 | orchestrator | 2026-04-17 00:56:46.090720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 00:56:46.090737 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:00.189) 0:00:03.146 ********** 2026-04-17 00:56:46.090742 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.090745 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.090749 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.090753 | orchestrator | 2026-04-17 00:56:46.090756 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 00:56:46.090760 | orchestrator | Friday 17 April 2026 00:53:57 +0000 (0:00:00.299) 0:00:03.445 ********** 2026-04-17 00:56:46.090764 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-17 00:56:46.090768 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-04-17 00:56:46.090772 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-17 00:56:46.090776 | orchestrator | 2026-04-17 00:56:46.090788 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-17 00:56:46.090792 | orchestrator | 2026-04-17 00:56:46.090796 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-17 00:56:46.090800 | orchestrator | Friday 17 April 2026 00:53:58 +0000 (0:00:00.379) 0:00:03.824 ********** 2026-04-17 00:56:46.090804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-17 00:56:46.090809 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-17 00:56:46.090812 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-17 00:56:46.090816 | orchestrator | 2026-04-17 00:56:46.090820 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 00:56:46.090824 | orchestrator | Friday 17 April 2026 00:53:58 +0000 (0:00:00.355) 0:00:04.180 ********** 2026-04-17 00:56:46.090827 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:56:46.090915 | orchestrator | 2026-04-17 00:56:46.090921 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-17 00:56:46.090924 | orchestrator | Friday 17 April 2026 00:53:59 +0000 (0:00:00.709) 0:00:04.890 ********** 2026-04-17 00:56:46.090942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.090951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.090962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.090967 | orchestrator | 2026-04-17 00:56:46.090975 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-17 00:56:46.090979 | orchestrator | Friday 17 April 2026 00:54:02 +0000 (0:00:03.373) 0:00:08.263 ********** 2026-04-17 00:56:46.090983 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.090988 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.090991 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.090995 | orchestrator | 2026-04-17 00:56:46.090999 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-17 00:56:46.091007 | orchestrator | Friday 17 April 2026 00:54:03 +0000 (0:00:00.656) 0:00:08.919 ********** 2026-04-17 00:56:46.091010 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
00:56:46.091014 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091018 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091022 | orchestrator | 2026-04-17 00:56:46.091025 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-17 00:56:46.091029 | orchestrator | Friday 17 April 2026 00:54:04 +0000 (0:00:01.444) 0:00:10.364 ********** 2026-04-17 00:56:46.091035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.091043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.091053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 
00:56:46.091058 | orchestrator | 2026-04-17 00:56:46.091062 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-17 00:56:46.091066 | orchestrator | Friday 17 April 2026 00:54:09 +0000 (0:00:04.176) 0:00:14.541 ********** 2026-04-17 00:56:46.091076 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091081 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091084 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091088 | orchestrator | 2026-04-17 00:56:46.091092 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-17 00:56:46.091095 | orchestrator | Friday 17 April 2026 00:54:10 +0000 (0:00:01.143) 0:00:15.685 ********** 2026-04-17 00:56:46.091099 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:46.091103 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:46.091107 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091110 | orchestrator | 2026-04-17 00:56:46.091114 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 00:56:46.091118 | orchestrator | Friday 17 April 2026 00:54:13 +0000 (0:00:03.493) 0:00:19.178 ********** 2026-04-17 00:56:46.091121 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:56:46.091125 | orchestrator | 2026-04-17 00:56:46.091129 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-17 00:56:46.091133 | orchestrator | Friday 17 April 2026 00:54:14 +0000 (0:00:00.487) 0:00:19.666 ********** 2026-04-17 00:56:46.091141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091148 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091159 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091176 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091180 | orchestrator | 2026-04-17 00:56:46.091184 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-17 00:56:46.091187 | orchestrator | Friday 17 April 2026 00:54:16 +0000 (0:00:02.409) 0:00:22.076 ********** 2026-04-17 00:56:46.091194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091198 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091212 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091223 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091227 | orchestrator | 2026-04-17 00:56:46.091231 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-17 00:56:46.091234 | orchestrator | Friday 17 April 2026 00:54:18 +0000 (0:00:02.244) 0:00:24.320 ********** 2026-04-17 00:56:46.091238 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091249 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091264 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-17 00:56:46.091275 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091279 | orchestrator | 2026-04-17 00:56:46.091283 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-17 00:56:46.091287 | orchestrator | Friday 17 April 2026 00:54:21 +0000 
(0:00:02.441) 0:00:26.762 ********** 2026-04-17 00:56:46.091298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.091304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.091318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-17 00:56:46.091323 | orchestrator | 2026-04-17 00:56:46.091327 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-17 00:56:46.091330 | orchestrator | Friday 17 April 2026 00:54:24 +0000 (0:00:02.885) 0:00:29.648 ********** 2026-04-17 00:56:46.091334 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091338 | orchestrator | 
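The service definitions dumped in the items above configure a container healthcheck running `/usr/bin/clustercheck` and an HAProxy backend in which only `testbed-node-0` is an active member while the other two Galera nodes are marked `backup`, i.e. active/passive access to the cluster. A minimal sketch of reading that topology out of the `custom_member_list` strings shown in the log (the parser is a hypothetical helper for illustration, not part of kolla-ansible):

```python
# Member lines exactly as they appear in the kolla-ansible service dict above.
custom_member_list = [
    ' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5',
    ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup',
    ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup',
    '',
]

def parse_member(line):
    """Return (name, address, is_backup) for one HAProxy 'server' line, else None."""
    parts = line.split()
    if not parts or parts[0] != 'server':
        return None
    # A trailing 'backup' keyword makes the member passive until the
    # active server fails its health checks.
    return parts[1], parts[2], parts[-1] == 'backup'

members = [m for m in map(parse_member, custom_member_list) if m is not None]
```

With the list above, exactly one member (`testbed-node-0`) is active and the remaining two are backups, which matches how kolla-ansible avoids multi-writer conflicts on a Galera cluster behind HAProxy.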
changed: [testbed-node-1] 2026-04-17 00:56:46.091342 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:46.091346 | orchestrator | 2026-04-17 00:56:46.091349 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-17 00:56:46.091354 | orchestrator | Friday 17 April 2026 00:54:24 +0000 (0:00:00.744) 0:00:30.392 ********** 2026-04-17 00:56:46.091358 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091362 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.091366 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.091370 | orchestrator | 2026-04-17 00:56:46.091374 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-17 00:56:46.091377 | orchestrator | Friday 17 April 2026 00:54:25 +0000 (0:00:00.299) 0:00:30.691 ********** 2026-04-17 00:56:46.091381 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091386 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.091390 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.091394 | orchestrator | 2026-04-17 00:56:46.091398 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-17 00:56:46.091402 | orchestrator | Friday 17 April 2026 00:54:25 +0000 (0:00:00.313) 0:00:31.005 ********** 2026-04-17 00:56:46.091410 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-17 00:56:46.091415 | orchestrator | ...ignoring 2026-04-17 00:56:46.091419 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-17 00:56:46.091423 | orchestrator | ...ignoring 2026-04-17 00:56:46.091427 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-17 00:56:46.091431 | orchestrator | ...ignoring 2026-04-17 00:56:46.091435 | orchestrator | 2026-04-17 00:56:46.091439 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-17 00:56:46.091444 | orchestrator | Friday 17 April 2026 00:54:36 +0000 (0:00:11.078) 0:00:42.083 ********** 2026-04-17 00:56:46.091448 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091452 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.091457 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.091462 | orchestrator | 2026-04-17 00:56:46.091466 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-17 00:56:46.091470 | orchestrator | Friday 17 April 2026 00:54:37 +0000 (0:00:00.486) 0:00:42.569 ********** 2026-04-17 00:56:46.091474 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091479 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091483 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091534 | orchestrator | 2026-04-17 00:56:46.091539 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-17 00:56:46.091543 | orchestrator | Friday 17 April 2026 00:54:37 +0000 (0:00:00.495) 0:00:43.065 ********** 2026-04-17 00:56:46.091547 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091551 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091555 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091560 | orchestrator | 2026-04-17 00:56:46.091564 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-17 00:56:46.091568 | orchestrator | Friday 17 April 2026 00:54:38 +0000 (0:00:00.487) 0:00:43.552 ********** 2026-04-17 00:56:46.091573 | orchestrator | skipping: 
[testbed-node-0] 2026-04-17 00:56:46.091586 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091591 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091595 | orchestrator | 2026-04-17 00:56:46.091599 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-17 00:56:46.091603 | orchestrator | Friday 17 April 2026 00:54:38 +0000 (0:00:00.896) 0:00:44.448 ********** 2026-04-17 00:56:46.091608 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091612 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.091616 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.091621 | orchestrator | 2026-04-17 00:56:46.091625 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-17 00:56:46.091629 | orchestrator | Friday 17 April 2026 00:54:39 +0000 (0:00:00.513) 0:00:44.962 ********** 2026-04-17 00:56:46.091636 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091641 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091645 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091650 | orchestrator | 2026-04-17 00:56:46.091654 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 00:56:46.091658 | orchestrator | Friday 17 April 2026 00:54:39 +0000 (0:00:00.453) 0:00:45.416 ********** 2026-04-17 00:56:46.091663 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091667 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091671 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-17 00:56:46.091676 | orchestrator | 2026-04-17 00:56:46.091680 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-17 00:56:46.091684 | orchestrator | Friday 17 April 2026 00:54:40 +0000 (0:00:00.407) 0:00:45.824 ********** 2026-04-17 
00:56:46.091694 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091698 | orchestrator | 2026-04-17 00:56:46.091703 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-17 00:56:46.091707 | orchestrator | Friday 17 April 2026 00:54:50 +0000 (0:00:10.242) 0:00:56.067 ********** 2026-04-17 00:56:46.091712 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091716 | orchestrator | 2026-04-17 00:56:46.091721 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-17 00:56:46.091728 | orchestrator | Friday 17 April 2026 00:54:50 +0000 (0:00:00.267) 0:00:56.334 ********** 2026-04-17 00:56:46.091733 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091737 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091741 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091745 | orchestrator | 2026-04-17 00:56:46.091750 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-17 00:56:46.091754 | orchestrator | Friday 17 April 2026 00:54:51 +0000 (0:00:00.795) 0:00:57.129 ********** 2026-04-17 00:56:46.091759 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091763 | orchestrator | 2026-04-17 00:56:46.091767 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-17 00:56:46.091772 | orchestrator | Friday 17 April 2026 00:54:59 +0000 (0:00:07.410) 0:01:04.539 ********** 2026-04-17 00:56:46.091776 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091781 | orchestrator | 2026-04-17 00:56:46.091785 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-17 00:56:46.091789 | orchestrator | Friday 17 April 2026 00:55:00 +0000 (0:00:01.561) 0:01:06.100 ********** 2026-04-17 00:56:46.091793 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.091798 | 
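The handler sequence above bootstraps Galera on `testbed-node-0` first (bootstrap container, then "Wait for first MariaDB service port liveness", then "Wait for first MariaDB service to sync WSREP") before joining the remaining nodes. The WSREP wait amounts to polling `wsrep_local_state_comment` until it reports `Synced`. A minimal sketch of such a polling loop, with hypothetical names and a pluggable `get_state` callable (not the actual kolla-ansible task code); in practice `get_state` would run something like `SHOW STATUS LIKE 'wsrep_local_state_comment'` against the node:

```python
import time

def wait_for_wsrep_synced(get_state, retries=10, delay=1.0):
    """Poll a Galera node's WSREP state until it reports 'Synced'.

    get_state: callable returning the current wsrep_local_state_comment
    value, e.g. 'Joining', 'Joined', 'Donor/Desynced', or 'Synced'.
    Returns True once synced, False if retries are exhausted.
    """
    for attempt in range(retries):
        if get_state() == 'Synced':
            return True
        time.sleep(delay)
    return False
```

Joining nodes serially, as the two "Start mariadb services" plays above do for `testbed-node-1` and then `testbed-node-2`, keeps at most one node in state transfer (SST/IST) at a time.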
orchestrator | 2026-04-17 00:56:46.091802 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-17 00:56:46.091806 | orchestrator | Friday 17 April 2026 00:55:03 +0000 (0:00:02.663) 0:01:08.764 ********** 2026-04-17 00:56:46.091810 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.091815 | orchestrator | 2026-04-17 00:56:46.091819 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-17 00:56:46.091823 | orchestrator | Friday 17 April 2026 00:55:03 +0000 (0:00:00.166) 0:01:08.930 ********** 2026-04-17 00:56:46.091828 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091832 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.091836 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.091841 | orchestrator | 2026-04-17 00:56:46.091845 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-17 00:56:46.091849 | orchestrator | Friday 17 April 2026 00:55:03 +0000 (0:00:00.314) 0:01:09.245 ********** 2026-04-17 00:56:46.091854 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.091858 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:46.091863 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:46.091867 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-17 00:56:46.091871 | orchestrator | 2026-04-17 00:56:46.091875 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-17 00:56:46.091879 | orchestrator | skipping: no hosts matched 2026-04-17 00:56:46.091884 | orchestrator | 2026-04-17 00:56:46.091888 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-17 00:56:46.091892 | orchestrator | 2026-04-17 00:56:46.091896 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-17 00:56:46.091901 | orchestrator | Friday 17 April 2026 00:55:04 +0000 (0:00:00.327) 0:01:09.572 ********** 2026-04-17 00:56:46.091905 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:56:46.091909 | orchestrator | 2026-04-17 00:56:46.091913 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-17 00:56:46.091918 | orchestrator | Friday 17 April 2026 00:55:19 +0000 (0:00:15.212) 0:01:24.784 ********** 2026-04-17 00:56:46.091922 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.091929 | orchestrator | 2026-04-17 00:56:46.091934 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 00:56:46.091938 | orchestrator | Friday 17 April 2026 00:55:34 +0000 (0:00:15.539) 0:01:40.324 ********** 2026-04-17 00:56:46.091943 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.091947 | orchestrator | 2026-04-17 00:56:46.091951 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-17 00:56:46.091955 | orchestrator | 2026-04-17 00:56:46.091959 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-17 00:56:46.091963 | orchestrator | Friday 17 April 2026 00:55:37 +0000 (0:00:02.449) 0:01:42.774 ********** 2026-04-17 00:56:46.091966 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:56:46.091970 | orchestrator | 2026-04-17 00:56:46.091974 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-17 00:56:46.091978 | orchestrator | Friday 17 April 2026 00:55:54 +0000 (0:00:17.049) 0:01:59.823 ********** 2026-04-17 00:56:46.091981 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.091985 | orchestrator | 2026-04-17 00:56:46.091989 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 00:56:46.091992 
| orchestrator | Friday 17 April 2026 00:56:10 +0000 (0:00:15.859) 0:02:15.682 ********** 2026-04-17 00:56:46.091996 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.092000 | orchestrator | 2026-04-17 00:56:46.092004 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-17 00:56:46.092007 | orchestrator | 2026-04-17 00:56:46.092013 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-17 00:56:46.092017 | orchestrator | Friday 17 April 2026 00:56:12 +0000 (0:00:02.255) 0:02:17.938 ********** 2026-04-17 00:56:46.092021 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.092025 | orchestrator | 2026-04-17 00:56:46.092029 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-17 00:56:46.092032 | orchestrator | Friday 17 April 2026 00:56:23 +0000 (0:00:11.431) 0:02:29.370 ********** 2026-04-17 00:56:46.092036 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.092040 | orchestrator | 2026-04-17 00:56:46.092043 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-17 00:56:46.092047 | orchestrator | Friday 17 April 2026 00:56:28 +0000 (0:00:04.603) 0:02:33.973 ********** 2026-04-17 00:56:46.092051 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.092055 | orchestrator | 2026-04-17 00:56:46.092058 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-17 00:56:46.092062 | orchestrator | 2026-04-17 00:56:46.092066 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-17 00:56:46.092069 | orchestrator | Friday 17 April 2026 00:56:31 +0000 (0:00:02.600) 0:02:36.574 ********** 2026-04-17 00:56:46.092073 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:56:46.092077 | orchestrator | 
2026-04-17 00:56:46.092133 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-17 00:56:46.092138 | orchestrator | Friday 17 April 2026 00:56:31 +0000 (0:00:00.604) 0:02:37.178 ********** 2026-04-17 00:56:46.092142 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.092146 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.092150 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.092153 | orchestrator | 2026-04-17 00:56:46.092157 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-17 00:56:46.092161 | orchestrator | Friday 17 April 2026 00:56:34 +0000 (0:00:02.550) 0:02:39.728 ********** 2026-04-17 00:56:46.092165 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.092169 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.092172 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.092176 | orchestrator | 2026-04-17 00:56:46.092180 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-17 00:56:46.092184 | orchestrator | Friday 17 April 2026 00:56:36 +0000 (0:00:02.312) 0:02:42.041 ********** 2026-04-17 00:56:46.092191 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.092194 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.092198 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.092202 | orchestrator | 2026-04-17 00:56:46.092206 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-17 00:56:46.092209 | orchestrator | Friday 17 April 2026 00:56:39 +0000 (0:00:02.697) 0:02:44.739 ********** 2026-04-17 00:56:46.092213 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.092217 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.092221 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:56:46.092224 | orchestrator | 
2026-04-17 00:56:46.092228 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-17 00:56:46.092232 | orchestrator | Friday 17 April 2026 00:56:41 +0000 (0:00:02.639) 0:02:47.378 ********** 2026-04-17 00:56:46.092236 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:56:46.092239 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:56:46.092243 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:56:46.092247 | orchestrator | 2026-04-17 00:56:46.092250 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-17 00:56:46.092254 | orchestrator | Friday 17 April 2026 00:56:44 +0000 (0:00:02.678) 0:02:50.056 ********** 2026-04-17 00:56:46.092258 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:56:46.092262 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:56:46.092265 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:56:46.092269 | orchestrator | 2026-04-17 00:56:46.092273 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:56:46.092277 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-17 00:56:46.092281 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-17 00:56:46.092287 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-17 00:56:46.092290 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-17 00:56:46.092294 | orchestrator | 2026-04-17 00:56:46.092298 | orchestrator | 2026-04-17 00:56:46.092302 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:56:46.092306 | orchestrator | Friday 17 April 2026 00:56:44 +0000 (0:00:00.216) 0:02:50.272 ********** 2026-04-17 00:56:46.092309 | 
orchestrator | =============================================================================== 2026-04-17 00:56:46.092313 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.26s 2026-04-17 00:56:46.092317 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.40s 2026-04-17 00:56:46.092320 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.43s 2026-04-17 00:56:46.092324 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.08s 2026-04-17 00:56:46.092328 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.24s 2026-04-17 00:56:46.092332 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.41s 2026-04-17 00:56:46.092338 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.71s 2026-04-17 00:56:46.092342 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.60s 2026-04-17 00:56:46.092346 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.18s 2026-04-17 00:56:46.092350 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.49s 2026-04-17 00:56:46.092353 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.37s 2026-04-17 00:56:46.092361 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.89s 2026-04-17 00:56:46.092365 | orchestrator | Check MariaDB service --------------------------------------------------- 2.77s 2026-04-17 00:56:46.092368 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.70s 2026-04-17 00:56:46.092386 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.68s 2026-04-17 00:56:46.092390 | orchestrator | 
mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.66s 2026-04-17 00:56:46.092394 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.64s 2026-04-17 00:56:46.092398 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.60s 2026-04-17 00:56:46.092404 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.55s 2026-04-17 00:56:46.092408 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.44s 2026-04-17 00:56:49.143783 | orchestrator | 2026-04-17 00:56:49 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED 2026-04-17 00:56:49.145962 | orchestrator | 2026-04-17 00:56:49 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:56:49.148558 | orchestrator | 2026-04-17 00:56:49 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:56:49.148717 | orchestrator | 2026-04-17 00:56:49 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:56:52.182872 | orchestrator | 2026-04-17 00:56:52 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED 2026-04-17 00:56:52.183705 | orchestrator | 2026-04-17 00:56:52 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:56:52.184439 | orchestrator | 2026-04-17 00:56:52 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:56:52.184451 | orchestrator | 2026-04-17 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:56:55.223844 | orchestrator | 2026-04-17 00:56:55 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED 2026-04-17 00:56:55.225058 | orchestrator | 2026-04-17 00:56:55 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:56:55.229203 | orchestrator | 2026-04-17 00:56:55 | INFO  | Task 
b4eec833-9eec-4967-b999-29b39cc51c30 is in state STARTED 2026-04-17 00:57:31.726893 | orchestrator | 2026-04-17 00:57:31 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:57:31.728840 | orchestrator | 2026-04-17 00:57:31 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:57:31.728964 | orchestrator | 2026-04-17 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:57:34.784283 | orchestrator | 2026-04-17 00:57:34 | INFO  | Task b4eec833-9eec-4967-b999-29b39cc51c30 is in state SUCCESS 2026-04-17 00:57:34.785287 | orchestrator | 2026-04-17 00:57:34.785351 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-17 00:57:34.785361 | orchestrator | 2.16.14 2026-04-17 00:57:34.785369 | orchestrator | 2026-04-17 00:57:34.785376 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-17 00:57:34.785383 | orchestrator | 2026-04-17 00:57:34.785389 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-17 00:57:34.785396 | orchestrator | Friday 17 April 2026 00:55:42 +0000 (0:00:00.596) 0:00:00.596 ********** 2026-04-17 00:57:34.785402 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:57:34.785410 | orchestrator | 2026-04-17 00:57:34.785416 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-17 00:57:34.785422 | orchestrator | Friday 17 April 2026 00:55:42 +0000 (0:00:00.651) 0:00:01.247 ********** 2026-04-17 00:57:34.785429 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.785435 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.785442 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.785448 | orchestrator | 2026-04-17 00:57:34.785456 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-04-17 00:57:34.785463 | orchestrator | Friday 17 April 2026 00:55:43 +0000 (0:00:00.932) 0:00:02.180 ********** 2026-04-17 00:57:34.785685 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.785858 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.785865 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.785869 | orchestrator | 2026-04-17 00:57:34.785874 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-17 00:57:34.785888 | orchestrator | Friday 17 April 2026 00:55:44 +0000 (0:00:00.285) 0:00:02.465 ********** 2026-04-17 00:57:34.785893 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.785896 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.785900 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.785904 | orchestrator | 2026-04-17 00:57:34.785908 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-17 00:57:34.785912 | orchestrator | Friday 17 April 2026 00:55:44 +0000 (0:00:00.764) 0:00:03.229 ********** 2026-04-17 00:57:34.785915 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.785919 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.785923 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.785926 | orchestrator | 2026-04-17 00:57:34.785930 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-17 00:57:34.785952 | orchestrator | Friday 17 April 2026 00:55:45 +0000 (0:00:00.297) 0:00:03.527 ********** 2026-04-17 00:57:34.785956 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.785960 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.785964 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.785967 | orchestrator | 2026-04-17 00:57:34.785971 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-17 00:57:34.785975 | 
orchestrator | Friday 17 April 2026 00:55:45 +0000 (0:00:00.281) 0:00:03.809 ********** 2026-04-17 00:57:34.785978 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.785982 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.785986 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.785990 | orchestrator | 2026-04-17 00:57:34.785994 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-17 00:57:34.785997 | orchestrator | Friday 17 April 2026 00:55:45 +0000 (0:00:00.287) 0:00:04.096 ********** 2026-04-17 00:57:34.786001 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786006 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786010 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786045 | orchestrator | 2026-04-17 00:57:34.786049 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-17 00:57:34.786053 | orchestrator | Friday 17 April 2026 00:55:46 +0000 (0:00:00.471) 0:00:04.568 ********** 2026-04-17 00:57:34.786192 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.786199 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.786203 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.786207 | orchestrator | 2026-04-17 00:57:34.786210 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-17 00:57:34.786214 | orchestrator | Friday 17 April 2026 00:55:46 +0000 (0:00:00.300) 0:00:04.869 ********** 2026-04-17 00:57:34.786218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:57:34.786222 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:57:34.786226 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:57:34.786229 | orchestrator | 2026-04-17 00:57:34.786233 | 
orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-17 00:57:34.786237 | orchestrator | Friday 17 April 2026 00:55:47 +0000 (0:00:00.639) 0:00:05.508 ********** 2026-04-17 00:57:34.786240 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.786244 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.786248 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.786252 | orchestrator | 2026-04-17 00:57:34.786255 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-17 00:57:34.786259 | orchestrator | Friday 17 April 2026 00:55:47 +0000 (0:00:00.410) 0:00:05.919 ********** 2026-04-17 00:57:34.786263 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:57:34.786266 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:57:34.786270 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:57:34.786274 | orchestrator | 2026-04-17 00:57:34.786277 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-17 00:57:34.786281 | orchestrator | Friday 17 April 2026 00:55:50 +0000 (0:00:03.069) 0:00:08.989 ********** 2026-04-17 00:57:34.786284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 00:57:34.786289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 00:57:34.786292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-17 00:57:34.786297 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786301 | orchestrator | 2026-04-17 00:57:34.786326 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-17 00:57:34.786333 | orchestrator | Friday 17 April 2026 00:55:50 +0000 (0:00:00.391) 0:00:09.380 ********** 
2026-04-17 00:57:34.786347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.786353 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.786357 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.786361 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786365 | orchestrator | 2026-04-17 00:57:34.786369 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-17 00:57:34.786372 | orchestrator | Friday 17 April 2026 00:55:51 +0000 (0:00:00.766) 0:00:10.147 ********** 2026-04-17 00:57:34.786382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.786389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.786392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.786398 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786405 | orchestrator | 2026-04-17 00:57:34.786411 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-17 00:57:34.786418 | orchestrator | Friday 17 April 2026 00:55:51 +0000 (0:00:00.160) 0:00:10.307 ********** 2026-04-17 00:57:34.786428 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd8565b72742c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-17 00:55:48.449845', 'end': '2026-04-17 00:55:48.478232', 'delta': '0:00:00.028387', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d8565b72742c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-17 00:57:34.786438 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1202ae0c4591', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-17 00:55:49.515446', 'end': '2026-04-17 
00:55:49.557307', 'delta': '0:00:00.041861', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1202ae0c4591'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-17 00:57:34.786469 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '42ccd5fa2fbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-17 00:55:50.362827', 'end': '2026-04-17 00:55:50.406263', 'delta': '0:00:00.043436', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['42ccd5fa2fbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-17 00:57:34.786477 | orchestrator | 2026-04-17 00:57:34.786483 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-17 00:57:34.786489 | orchestrator | Friday 17 April 2026 00:55:52 +0000 (0:00:00.378) 0:00:10.686 ********** 2026-04-17 00:57:34.786495 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.786501 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.786506 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.786512 | orchestrator | 2026-04-17 00:57:34.786517 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-17 00:57:34.786553 | orchestrator 
| Friday 17 April 2026 00:55:52 +0000 (0:00:00.419) 0:00:11.105 ********** 2026-04-17 00:57:34.786559 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-17 00:57:34.786565 | orchestrator | 2026-04-17 00:57:34.786571 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-17 00:57:34.786576 | orchestrator | Friday 17 April 2026 00:55:53 +0000 (0:00:01.215) 0:00:12.320 ********** 2026-04-17 00:57:34.786582 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786588 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786594 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786599 | orchestrator | 2026-04-17 00:57:34.786605 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-17 00:57:34.786611 | orchestrator | Friday 17 April 2026 00:55:54 +0000 (0:00:00.284) 0:00:12.605 ********** 2026-04-17 00:57:34.786618 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786624 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786630 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786637 | orchestrator | 2026-04-17 00:57:34.786643 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 00:57:34.786649 | orchestrator | Friday 17 April 2026 00:55:54 +0000 (0:00:00.392) 0:00:12.997 ********** 2026-04-17 00:57:34.786654 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786658 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786662 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786666 | orchestrator | 2026-04-17 00:57:34.786669 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-17 00:57:34.786673 | orchestrator | Friday 17 April 2026 00:55:54 +0000 (0:00:00.446) 0:00:13.443 ********** 2026-04-17 00:57:34.786677 | orchestrator | 
ok: [testbed-node-3] 2026-04-17 00:57:34.786681 | orchestrator | 2026-04-17 00:57:34.786684 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-17 00:57:34.786688 | orchestrator | Friday 17 April 2026 00:55:55 +0000 (0:00:00.122) 0:00:13.566 ********** 2026-04-17 00:57:34.786692 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786696 | orchestrator | 2026-04-17 00:57:34.786699 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-17 00:57:34.786703 | orchestrator | Friday 17 April 2026 00:55:55 +0000 (0:00:00.204) 0:00:13.771 ********** 2026-04-17 00:57:34.786713 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786717 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786721 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786724 | orchestrator | 2026-04-17 00:57:34.786728 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-17 00:57:34.786732 | orchestrator | Friday 17 April 2026 00:55:55 +0000 (0:00:00.266) 0:00:14.037 ********** 2026-04-17 00:57:34.786735 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786739 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786743 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786747 | orchestrator | 2026-04-17 00:57:34.786751 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-17 00:57:34.786755 | orchestrator | Friday 17 April 2026 00:55:55 +0000 (0:00:00.302) 0:00:14.340 ********** 2026-04-17 00:57:34.786758 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786762 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786766 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786770 | orchestrator | 2026-04-17 00:57:34.786773 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] *************************** 2026-04-17 00:57:34.786777 | orchestrator | Friday 17 April 2026 00:55:56 +0000 (0:00:00.484) 0:00:14.825 ********** 2026-04-17 00:57:34.786781 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786785 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786789 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786792 | orchestrator | 2026-04-17 00:57:34.786796 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-17 00:57:34.786800 | orchestrator | Friday 17 April 2026 00:55:56 +0000 (0:00:00.310) 0:00:15.135 ********** 2026-04-17 00:57:34.786803 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786807 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786811 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786815 | orchestrator | 2026-04-17 00:57:34.786820 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-17 00:57:34.786824 | orchestrator | Friday 17 April 2026 00:55:56 +0000 (0:00:00.300) 0:00:15.435 ********** 2026-04-17 00:57:34.786829 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786833 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786837 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786860 | orchestrator | 2026-04-17 00:57:34.786865 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-17 00:57:34.786869 | orchestrator | Friday 17 April 2026 00:55:57 +0000 (0:00:00.296) 0:00:15.732 ********** 2026-04-17 00:57:34.786874 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.786878 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.786883 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.786887 | orchestrator | 2026-04-17 00:57:34.786893 | orchestrator | TASK [ceph-facts : Collect 
existed devices] ************************************ 2026-04-17 00:57:34.786900 | orchestrator | Friday 17 April 2026 00:55:57 +0000 (0:00:00.465) 0:00:16.198 ********** 2026-04-17 00:57:34.786908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e', 'dm-uuid-LVM-g9X6l0qwDIZWCpRWNEx1zkSl2Za2dKIeStxmKKanMhqLvtUuPaP0LfahY1QRB2m1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db', 'dm-uuid-LVM-1X6Uih7qPguWyuCZMmyrP2EbSIAqMcGMBcGbolZV6Jf1sLG9qibnfQkSm63AWGNe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786940 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.786991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787037 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-colrFE-4Hk0-qtHQ-x927-2e4m-YVAL-XO7LZ6', 'scsi-0QEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7', 'scsi-SQEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QFHCby-2alW-5GKP-zhKf-5k5e-WAHr-CBnv39', 'scsi-0QEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c', 'scsi-SQEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9', 'scsi-SQEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7', 'dm-uuid-LVM-0Utkg46oKijwoDG46BuLcXixJM5j7w0mUYRqO3NpqwwYZ54MvhNbLP1SVpKLJwT4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb', 
'dm-uuid-LVM-GAlEacEbi1CwcOISAVKhXFrtTEYp6ye09GTUmP11e6XNES9wmceSUAE8bF5Ro1JF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787177 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.787188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fisNcD-or7H-PZQo-LaTD-fY2d-g4Xo-sXbrTK', 'scsi-0QEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20', 'scsi-SQEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Kt5TXT-Lx50-maGB-s3l4-DCg1-ASrb-1tDoJY', 'scsi-0QEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128', 'scsi-SQEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe', 'scsi-SQEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787255 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.787261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198', 'dm-uuid-LVM-nCYaAaVdToTEuRn1zAv5nxcwpgFNSw2rlf27CKxFeSt93SMWQNXTDv6H9rfbdmbg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64', 'dm-uuid-LVM-cY8ffF6iH2rYYxBC9cIWQg2oZ3Bddr1zv6dOY66N7vdlGXs1o6D2tVpbz91AILgU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787327 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-17 00:57:34.787348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787359 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nfWxIo-AfkD-xidi-gWBA-bejh-6Mm2-r5fz5c', 'scsi-0QEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363', 'scsi-SQEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQNsKy-QiL1-lcEN-TCk1-59lB-PdnC-LfU0c2', 'scsi-0QEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06', 'scsi-SQEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b', 'scsi-SQEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-17 00:57:34.787394 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.787400 | orchestrator | 2026-04-17 00:57:34.787406 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-17 00:57:34.787412 | orchestrator | Friday 17 April 2026 00:55:58 +0000 (0:00:00.566) 0:00:16.764 ********** 2026-04-17 00:57:34.787419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e', 'dm-uuid-LVM-g9X6l0qwDIZWCpRWNEx1zkSl2Za2dKIeStxmKKanMhqLvtUuPaP0LfahY1QRB2m1'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db', 'dm-uuid-LVM-1X6Uih7qPguWyuCZMmyrP2EbSIAqMcGMBcGbolZV6Jf1sLG9qibnfQkSm63AWGNe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7', 'dm-uuid-LVM-0Utkg46oKijwoDG46BuLcXixJM5j7w0mUYRqO3NpqwwYZ54MvhNbLP1SVpKLJwT4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787621 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb', 'dm-uuid-LVM-GAlEacEbi1CwcOISAVKhXFrtTEYp6ye09GTUmP11e6XNES9wmceSUAE8bF5Ro1JF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-17 00:57:34.787636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6cf65b8-cccd-43e0-af7e-f41bbf3c7356-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787643 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e-osd--block--2bf72114--67c4--59b2--99b4--0dc6e46ccf1e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-colrFE-4Hk0-qtHQ-x927-2e4m-YVAL-XO7LZ6', 'scsi-0QEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7', 'scsi-SQEMU_QEMU_HARDDISK_fef13603-3987-4653-89a2-a4e711571ea7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db-osd--block--ecb05008--8fcc--5a4f--bdd9--0d58d51e77db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QFHCby-2alW-5GKP-zhKf-5k5e-WAHr-CBnv39', 'scsi-0QEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c', 'scsi-SQEMU_QEMU_HARDDISK_0d637dae-6e45-402a-82ea-09e5e6b1641c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9', 'scsi-SQEMU_QEMU_HARDDISK_bde58240-ae36-45ef-aa17-191037945ea9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787724 | orchestrator | skipping: 
[testbed-node-3] 2026-04-17 00:57:34.787729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787744 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_18663bd2-cd83-4de0-86d3-64ee8af634cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f135813a--7de6--5823--bba0--0d89f58fd8f7-osd--block--f135813a--7de6--5823--bba0--0d89f58fd8f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fisNcD-or7H-PZQo-LaTD-fY2d-g4Xo-sXbrTK', 'scsi-0QEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20', 'scsi-SQEMU_QEMU_HARDDISK_7da9734b-be35-484c-b986-e25152d7af20'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787768 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198', 'dm-uuid-LVM-nCYaAaVdToTEuRn1zAv5nxcwpgFNSw2rlf27CKxFeSt93SMWQNXTDv6H9rfbdmbg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--96c1a302--a68f--51af--8cb0--5deb1c72c0bb-osd--block--96c1a302--a68f--51af--8cb0--5deb1c72c0bb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Kt5TXT-Lx50-maGB-s3l4-DCg1-ASrb-1tDoJY', 'scsi-0QEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128', 'scsi-SQEMU_QEMU_HARDDISK_cf4610dd-7a79-47aa-aaad-c27237a9a128'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787785 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64', 'dm-uuid-LVM-cY8ffF6iH2rYYxBC9cIWQg2oZ3Bddr1zv6dOY66N7vdlGXs1o6D2tVpbz91AILgU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe', 'scsi-SQEMU_QEMU_HARDDISK_e49fa4cf-cf8d-4b96-9e62-961cb10cabfe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787805 | orchestrator | skipping: 
[testbed-node-4] 2026-04-17 00:57:34.787809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787830 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787842 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c01abd8-1868-4e11-b3b4-646d408eb2d6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787865 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d097a065--5c07--563d--9f82--653f6f04c198-osd--block--d097a065--5c07--563d--9f82--653f6f04c198'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nfWxIo-AfkD-xidi-gWBA-bejh-6Mm2-r5fz5c', 'scsi-0QEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363', 'scsi-SQEMU_QEMU_HARDDISK_67bd38c1-9345-4e78-a265-9243ac6ca363'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787870 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--037810f1--d9a1--54dd--a4a8--d143a432af64-osd--block--037810f1--d9a1--54dd--a4a8--d143a432af64'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQNsKy-QiL1-lcEN-TCk1-59lB-PdnC-LfU0c2', 'scsi-0QEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06', 'scsi-SQEMU_QEMU_HARDDISK_9d6c755c-cc87-45a9-ab8c-3b8d21ca4f06'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b', 'scsi-SQEMU_QEMU_HARDDISK_0b492bf7-a5f9-4844-b9bb-c2ed5f2b6b7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787883 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-17-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-17 00:57:34.787888 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.787892 | orchestrator | 2026-04-17 00:57:34.787896 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-17 00:57:34.787900 | orchestrator | Friday 17 April 2026 00:55:58 +0000 (0:00:00.586) 0:00:17.351 ********** 2026-04-17 00:57:34.787904 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.787908 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.787912 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.787916 | orchestrator | 2026-04-17 00:57:34.787920 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-04-17 00:57:34.787924 | orchestrator | Friday 17 April 2026 00:55:59 +0000 (0:00:00.709) 0:00:18.061 ********** 2026-04-17 00:57:34.787928 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.787932 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.787936 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.787940 | orchestrator | 2026-04-17 00:57:34.787944 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 00:57:34.787950 | orchestrator | Friday 17 April 2026 00:56:00 +0000 (0:00:00.444) 0:00:18.506 ********** 2026-04-17 00:57:34.787954 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.787958 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.787962 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.787966 | orchestrator | 2026-04-17 00:57:34.787971 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-17 00:57:34.787975 | orchestrator | Friday 17 April 2026 00:56:00 +0000 (0:00:00.713) 0:00:19.220 ********** 2026-04-17 00:57:34.787979 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.787983 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.787987 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.787991 | orchestrator | 2026-04-17 00:57:34.787995 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-17 00:57:34.787999 | orchestrator | Friday 17 April 2026 00:56:01 +0000 (0:00:00.269) 0:00:19.490 ********** 2026-04-17 00:57:34.788003 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788007 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788010 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.788017 | orchestrator | 2026-04-17 00:57:34.788021 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-04-17 00:57:34.788025 | orchestrator | Friday 17 April 2026 00:56:01 +0000 (0:00:00.380) 0:00:19.871 ********** 2026-04-17 00:57:34.788029 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788033 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788037 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.788041 | orchestrator | 2026-04-17 00:57:34.788045 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-17 00:57:34.788048 | orchestrator | Friday 17 April 2026 00:56:01 +0000 (0:00:00.495) 0:00:20.366 ********** 2026-04-17 00:57:34.788052 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-17 00:57:34.788057 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-17 00:57:34.788060 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-17 00:57:34.788064 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-17 00:57:34.788068 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-17 00:57:34.788073 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-17 00:57:34.788077 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-17 00:57:34.788080 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-17 00:57:34.788084 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-17 00:57:34.788089 | orchestrator | 2026-04-17 00:57:34.788093 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-17 00:57:34.788097 | orchestrator | Friday 17 April 2026 00:56:02 +0000 (0:00:00.847) 0:00:21.214 ********** 2026-04-17 00:57:34.788101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-17 00:57:34.788105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-17 00:57:34.788109 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-04-17 00:57:34.788114 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788118 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-17 00:57:34.788122 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-17 00:57:34.788126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-17 00:57:34.788129 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788133 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-17 00:57:34.788137 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-17 00:57:34.788141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-17 00:57:34.788145 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.788149 | orchestrator | 2026-04-17 00:57:34.788153 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-17 00:57:34.788157 | orchestrator | Friday 17 April 2026 00:56:03 +0000 (0:00:00.360) 0:00:21.575 ********** 2026-04-17 00:57:34.788161 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 00:57:34.788165 | orchestrator | 2026-04-17 00:57:34.788169 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-17 00:57:34.788173 | orchestrator | Friday 17 April 2026 00:56:03 +0000 (0:00:00.696) 0:00:22.271 ********** 2026-04-17 00:57:34.788181 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788185 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788189 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.788193 | orchestrator | 2026-04-17 00:57:34.788197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-04-17 00:57:34.788201 | orchestrator | Friday 17 April 2026 00:56:04 +0000 (0:00:00.308) 0:00:22.580 ********** 2026-04-17 00:57:34.788205 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788209 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788216 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.788220 | orchestrator | 2026-04-17 00:57:34.788224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-17 00:57:34.788228 | orchestrator | Friday 17 April 2026 00:56:04 +0000 (0:00:00.317) 0:00:22.898 ********** 2026-04-17 00:57:34.788232 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788236 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788240 | orchestrator | skipping: [testbed-node-5] 2026-04-17 00:57:34.788243 | orchestrator | 2026-04-17 00:57:34.788248 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-17 00:57:34.788251 | orchestrator | Friday 17 April 2026 00:56:04 +0000 (0:00:00.300) 0:00:23.198 ********** 2026-04-17 00:57:34.788255 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.788259 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.788263 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.788266 | orchestrator | 2026-04-17 00:57:34.788271 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-17 00:57:34.788274 | orchestrator | Friday 17 April 2026 00:56:05 +0000 (0:00:00.545) 0:00:23.744 ********** 2026-04-17 00:57:34.788283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:57:34.788290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:57:34.788296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:57:34.788302 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788308 | 
orchestrator | 2026-04-17 00:57:34.788313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-17 00:57:34.788319 | orchestrator | Friday 17 April 2026 00:56:05 +0000 (0:00:00.361) 0:00:24.105 ********** 2026-04-17 00:57:34.788325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:57:34.788330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:57:34.788336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:57:34.788341 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788347 | orchestrator | 2026-04-17 00:57:34.788352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-17 00:57:34.788358 | orchestrator | Friday 17 April 2026 00:56:06 +0000 (0:00:00.369) 0:00:24.474 ********** 2026-04-17 00:57:34.788364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-17 00:57:34.788371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-17 00:57:34.788378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-17 00:57:34.788384 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788391 | orchestrator | 2026-04-17 00:57:34.788397 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-17 00:57:34.788403 | orchestrator | Friday 17 April 2026 00:56:06 +0000 (0:00:00.388) 0:00:24.863 ********** 2026-04-17 00:57:34.788410 | orchestrator | ok: [testbed-node-3] 2026-04-17 00:57:34.788415 | orchestrator | ok: [testbed-node-4] 2026-04-17 00:57:34.788419 | orchestrator | ok: [testbed-node-5] 2026-04-17 00:57:34.788423 | orchestrator | 2026-04-17 00:57:34.788427 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-17 00:57:34.788431 | orchestrator | Friday 17 April 2026 00:56:06 +0000 
(0:00:00.318) 0:00:25.181 ********** 2026-04-17 00:57:34.788435 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-17 00:57:34.788439 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-17 00:57:34.788443 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-17 00:57:34.788447 | orchestrator | 2026-04-17 00:57:34.788451 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-17 00:57:34.788455 | orchestrator | Friday 17 April 2026 00:56:07 +0000 (0:00:00.489) 0:00:25.670 ********** 2026-04-17 00:57:34.788459 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:57:34.788466 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:57:34.788480 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:57:34.788489 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-17 00:57:34.788496 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 00:57:34.788501 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 00:57:34.788507 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 00:57:34.788514 | orchestrator | 2026-04-17 00:57:34.788567 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-17 00:57:34.788577 | orchestrator | Friday 17 April 2026 00:56:08 +0000 (0:00:00.984) 0:00:26.655 ********** 2026-04-17 00:57:34.788585 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-17 00:57:34.788591 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-17 00:57:34.788597 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-17 00:57:34.788603 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-17 00:57:34.788609 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-17 00:57:34.788615 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-17 00:57:34.788628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-17 00:57:34.788635 | orchestrator | 2026-04-17 00:57:34.788640 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-17 00:57:34.788647 | orchestrator | Friday 17 April 2026 00:56:10 +0000 (0:00:01.878) 0:00:28.533 ********** 2026-04-17 00:57:34.788652 | orchestrator | skipping: [testbed-node-3] 2026-04-17 00:57:34.788658 | orchestrator | skipping: [testbed-node-4] 2026-04-17 00:57:34.788664 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-17 00:57:34.788669 | orchestrator | 2026-04-17 00:57:34.788676 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-17 00:57:34.788683 | orchestrator | Friday 17 April 2026 00:56:10 +0000 (0:00:00.380) 0:00:28.913 ********** 2026-04-17 00:57:34.788691 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-17 00:57:34.788704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-04-17 00:57:34.788708 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-17 00:57:34.788712 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-17 00:57:34.788716 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-17 00:57:34.788720 | orchestrator | 2026-04-17 00:57:34.788724 | orchestrator | TASK [generate keys] *********************************************************** 2026-04-17 00:57:34.788733 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:37.572) 0:01:06.486 ********** 2026-04-17 00:57:34.788737 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788741 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788745 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788749 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788753 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788756 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 
00:57:34.788760 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-17 00:57:34.788764 | orchestrator | 2026-04-17 00:57:34.788768 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-17 00:57:34.788772 | orchestrator | Friday 17 April 2026 00:57:06 +0000 (0:00:18.231) 0:01:24.718 ********** 2026-04-17 00:57:34.788776 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788780 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788784 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788787 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788791 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788795 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788798 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-17 00:57:34.788802 | orchestrator | 2026-04-17 00:57:34.788806 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-17 00:57:34.788810 | orchestrator | Friday 17 April 2026 00:57:15 +0000 (0:00:09.181) 0:01:33.900 ********** 2026-04-17 00:57:34.788814 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788818 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:57:34.788821 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:57:34.788825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788829 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:57:34.788837 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:57:34.788841 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788845 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:57:34.788849 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:57:34.788853 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788857 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:57:34.788861 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:57:34.788865 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788869 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:57:34.788873 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:57:34.788877 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-17 00:57:34.788880 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-17 00:57:34.788889 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-17 00:57:34.788895 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-17 00:57:34.788900 | orchestrator | 2026-04-17 00:57:34.788906 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:57:34.788912 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-17 00:57:34.788918 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-17 00:57:34.788923 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-17 00:57:34.788929 | orchestrator | 2026-04-17 00:57:34.788935 | orchestrator | 2026-04-17 00:57:34.788941 | orchestrator | 2026-04-17 00:57:34.788946 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:57:34.788956 | orchestrator | Friday 17 April 2026 00:57:33 +0000 (0:00:18.149) 0:01:52.050 ********** 2026-04-17 00:57:34.788963 | orchestrator | =============================================================================== 2026-04-17 00:57:34.788969 | orchestrator | create openstack pool(s) ----------------------------------------------- 37.57s 2026-04-17 00:57:34.788975 | orchestrator | generate keys ---------------------------------------------------------- 18.23s 2026-04-17 00:57:34.788980 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.15s 2026-04-17 00:57:34.788986 | orchestrator | get keys from monitors -------------------------------------------------- 9.18s 2026-04-17 00:57:34.788992 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.07s 2026-04-17 00:57:34.788997 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.88s 2026-04-17 00:57:34.789004 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.22s 2026-04-17 00:57:34.789010 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s 2026-04-17 00:57:34.789017 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.93s 2026-04-17 00:57:34.789022 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2026-04-17 
00:57:34.789028 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s 2026-04-17 00:57:34.789034 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.76s 2026-04-17 00:57:34.789040 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.71s 2026-04-17 00:57:34.789046 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-04-17 00:57:34.789051 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2026-04-17 00:57:34.789057 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2026-04-17 00:57:34.789062 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-04-17 00:57:34.789068 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2026-04-17 00:57:34.789073 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s 2026-04-17 00:57:34.789080 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.55s 2026-04-17 00:57:34.796731 | orchestrator | 2026-04-17 00:57:34 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:57:34.799900 | orchestrator | 2026-04-17 00:57:34 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:57:34.799978 | orchestrator | 2026-04-17 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:57:37.846963 | orchestrator | 2026-04-17 00:57:37 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:57:37.847054 | orchestrator | 2026-04-17 00:57:37 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:57:37.847060 | orchestrator | 2026-04-17 00:57:37 | INFO  | Task 13c38d33-39fe-4f66-9f0b-f05b03dfcbc1 is in state STARTED 2026-04-17 00:57:37.847065 | orchestrator | 2026-04-17 00:57:37 | INFO  | Wait 1 second(s) until the next check
a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:58:14.468077 | orchestrator | 2026-04-17 00:58:14 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:14.470198 | orchestrator | 2026-04-17 00:58:14 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:14.471381 | orchestrator | 2026-04-17 00:58:14 | INFO  | Task 13c38d33-39fe-4f66-9f0b-f05b03dfcbc1 is in state SUCCESS 2026-04-17 00:58:14.471527 | orchestrator | 2026-04-17 00:58:14 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:17.506455 | orchestrator | 2026-04-17 00:58:17 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:58:17.508255 | orchestrator | 2026-04-17 00:58:17 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:17.510088 | orchestrator | 2026-04-17 00:58:17 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:17.510156 | orchestrator | 2026-04-17 00:58:17 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:20.547441 | orchestrator | 2026-04-17 00:58:20 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:58:20.549585 | orchestrator | 2026-04-17 00:58:20 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:20.552603 | orchestrator | 2026-04-17 00:58:20 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:20.552779 | orchestrator | 2026-04-17 00:58:20 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:23.606633 | orchestrator | 2026-04-17 00:58:23 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:58:23.610069 | orchestrator | 2026-04-17 00:58:23 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:23.612950 | orchestrator | 2026-04-17 00:58:23 | INFO  | Task 
22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:23.613725 | orchestrator | 2026-04-17 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:26.663613 | orchestrator | 2026-04-17 00:58:26 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state STARTED 2026-04-17 00:58:26.667771 | orchestrator | 2026-04-17 00:58:26 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:26.670548 | orchestrator | 2026-04-17 00:58:26 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:26.670622 | orchestrator | 2026-04-17 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:29.711979 | orchestrator | 2026-04-17 00:58:29 | INFO  | Task a8415be3-ca85-4a50-ab70-3502a5ff68ed is in state SUCCESS 2026-04-17 00:58:29.712806 | orchestrator | 2026-04-17 00:58:29.712847 | orchestrator | 2026-04-17 00:58:29.712854 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-17 00:58:29.712859 | orchestrator | 2026-04-17 00:58:29.712864 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-17 00:58:29.712868 | orchestrator | Friday 17 April 2026 00:57:37 +0000 (0:00:00.281) 0:00:00.281 ********** 2026-04-17 00:58:29.712873 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-17 00:58:29.712878 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-17 00:58:29.712882 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-17 00:58:29.712886 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 00:58:29.712890 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.712894 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-17 00:58:29.712898 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-17 00:58:29.712902 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-17 00:58:29.712905 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-17 00:58:29.712909 | orchestrator |
2026-04-17 00:58:29.712913 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-17 00:58:29.712916 | orchestrator | Friday 17 April 2026 00:57:42 +0000 (0:00:05.627) 0:00:05.908 **********
2026-04-17 00:58:29.712920 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-17 00:58:29.712936 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.712940 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.712943 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 00:58:29.712964 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.712968 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-17 00:58:29.712971 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-17 00:58:29.712975 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-17 00:58:29.712979 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-17 00:58:29.712983 | orchestrator |
2026-04-17 00:58:29.712986 | orchestrator | TASK [Create share directory] **************************************************
2026-04-17 00:58:29.712990 | orchestrator | Friday 17 April 2026 00:57:47 +0000 (0:00:04.231) 0:00:10.140 **********
2026-04-17 00:58:29.712995 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-17 00:58:29.712999 | orchestrator |
2026-04-17 00:58:29.713003 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-17 00:58:29.713007 | orchestrator | Friday 17 April 2026 00:57:48 +0000 (0:00:00.975) 0:00:11.115 **********
2026-04-17 00:58:29.713011 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-17 00:58:29.713015 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.713019 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.713023 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 00:58:29.713026 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.713030 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-17 00:58:29.713034 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-17 00:58:29.713038 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-17 00:58:29.713041 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-17 00:58:29.713045 | orchestrator |
2026-04-17 00:58:29.713049 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-17 00:58:29.713053 | orchestrator | Friday 17 April 2026 00:58:01 +0000 (0:00:12.918) 0:00:24.033 **********
2026-04-17 00:58:29.713056 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-17 00:58:29.713060 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-17 00:58:29.713064 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-17 00:58:29.713068 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-17 00:58:29.713080 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-17 00:58:29.713133 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-17 00:58:29.713138 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-17 00:58:29.713142 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-17 00:58:29.713146 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-17 00:58:29.713150 | orchestrator |
2026-04-17 00:58:29.713154 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-17 00:58:29.713157 | orchestrator | Friday 17 April 2026 00:58:04 +0000 (0:00:03.251) 0:00:27.285 **********
2026-04-17 00:58:29.713162 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-17 00:58:29.713171 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.713175 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.713178 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-17 00:58:29.713182 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-17 00:58:29.713186 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-17 00:58:29.713190 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-17 00:58:29.713194 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-17 00:58:29.713197 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-17 00:58:29.713201 | orchestrator |
2026-04-17 00:58:29.713205 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:58:29.713212 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:58:29.713356 | orchestrator |
2026-04-17 00:58:29.713360 | orchestrator |
2026-04-17 00:58:29.713364 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:58:29.713368 | orchestrator | Friday 17 April 2026 00:58:11 +0000 (0:00:06.967) 0:00:34.253 **********
2026-04-17 00:58:29.713371 | orchestrator | ===============================================================================
2026-04-17 00:58:29.713375 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.92s
2026-04-17 00:58:29.713379 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.97s
2026-04-17 00:58:29.713382 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.63s
2026-04-17 00:58:29.713386 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.23s
2026-04-17 00:58:29.713390 | orchestrator | Check if target directories exist --------------------------------------- 3.25s
2026-04-17 00:58:29.713393 | orchestrator | Create share directory -------------------------------------------------- 0.98s
2026-04-17 00:58:29.713397 | orchestrator |
2026-04-17 00:58:29.713401 | orchestrator |
2026-04-17 00:58:29.713405 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 00:58:29.713408 | orchestrator |
2026-04-17 00:58:29.713412 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 00:58:29.713416 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:00.350) 0:00:00.351 **********
2026-04-17 00:58:29.713449 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.713454 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.713458 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.713461 | orchestrator |
2026-04-17 00:58:29.713465 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 00:58:29.713469 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:00.285) 0:00:00.636 **********
2026-04-17 00:58:29.713473 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-17 00:58:29.713505 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-17 00:58:29.713510 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-17 00:58:29.713514 | orchestrator |
2026-04-17 00:58:29.713518 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-17 00:58:29.713522 | orchestrator |
2026-04-17 00:58:29.713526 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 00:58:29.713529 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:00.325) 0:00:00.961 **********
2026-04-17 00:58:29.713533 | orchestrator | included:
/ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 00:58:29.713537 | orchestrator |
2026-04-17 00:58:29.713541 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-17 00:58:29.713544 | orchestrator | Friday 17 April 2026 00:56:49 +0000 (0:00:00.603) 0:00:01.565 **********
2026-04-17 00:58:29.713567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 00:58:29.713574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 00:58:29.713609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-17 00:58:29.713614 | orchestrator |
2026-04-17 00:58:29.713618 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-17 00:58:29.713621 | orchestrator | Friday 17 April 2026 00:56:50 +0000 (0:00:01.535) 0:00:03.100 **********
2026-04-17 00:58:29.713625 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.713629 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.713633 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.713637 | orchestrator |
2026-04-17 00:58:29.713640 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 00:58:29.713644 | orchestrator | Friday 17 April 2026 00:56:51 +0000 (0:00:00.295) 0:00:03.395 **********
2026-04-17 00:58:29.713648 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-17 00:58:29.713693 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-17
00:58:29.713699 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-17 00:58:29.713704 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-17 00:58:29.713710 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-17 00:58:29.713717 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-17 00:58:29.713724 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-17 00:58:29.713730 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-17 00:58:29.713741 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-17 00:58:29.713747 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-17 00:58:29.713753 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-17 00:58:29.713759 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-17 00:58:29.713765 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-17 00:58:29.713771 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-17 00:58:29.713777 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-17 00:58:29.713783 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-17 00:58:29.713789 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-17 00:58:29.713795 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-17 00:58:29.713802 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-17 00:58:29.713808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-17 00:58:29.713814 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-17 00:58:29.713820 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-17 00:58:29.713831 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-17 00:58:29.713838 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-17 00:58:29.713845 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-17 00:58:29.713854 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-17 00:58:29.713861 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-17 00:58:29.713867 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-17 00:58:29.713874 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-17 00:58:29.713882 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-17 00:58:29.713887 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-17 00:58:29.713891 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-17 00:58:29.713901 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-17 00:58:29.713908 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-17 00:58:29.713913 | orchestrator |
2026-04-17 00:58:29.713921 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 00:58:29.713927 | orchestrator | Friday 17 April 2026 00:56:51 +0000 (0:00:00.668) 0:00:04.063 **********
2026-04-17 00:58:29.713933 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.713944 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.713950 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.713955 | orchestrator |
2026-04-17 00:58:29.713961 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 00:58:29.713967 | orchestrator | Friday 17 April 2026 00:56:52 +0000 (0:00:00.483) 0:00:04.547 **********
2026-04-17 00:58:29.713972 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.713978 | orchestrator |
2026-04-17 00:58:29.713983 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 00:58:29.713988 | orchestrator | Friday 17 April 2026 00:56:52 +0000 (0:00:00.141) 0:00:04.689 **********
2026-04-17 00:58:29.713993 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.713998 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.714004 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.714009 | orchestrator |
2026-04-17 00:58:29.714062 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 00:58:29.714071 | orchestrator | Friday 17 April 2026 00:56:52 +0000 (0:00:00.269) 0:00:04.959 **********
2026-04-17 00:58:29.714077 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.714083 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.714089 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.714095 | orchestrator |
2026-04-17 00:58:29.714101 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 00:58:29.714106 | orchestrator | Friday 17 April 2026 00:56:53 +0000 (0:00:00.309) 0:00:05.268 **********
2026-04-17 00:58:29.714112 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714118 | orchestrator |
2026-04-17 00:58:29.714124 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 00:58:29.714129 | orchestrator | Friday 17 April 2026 00:56:53 +0000 (0:00:00.123) 0:00:05.392 **********
2026-04-17 00:58:29.714135 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714141 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.714147 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.714153 | orchestrator |
2026-04-17 00:58:29.714158 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 00:58:29.714164 | orchestrator | Friday 17 April 2026 00:56:53 +0000 (0:00:00.420) 0:00:05.812 **********
2026-04-17 00:58:29.714170 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.714175 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.714181 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.714187 | orchestrator |
2026-04-17 00:58:29.714192 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 00:58:29.714198 | orchestrator | Friday 17 April 2026 00:56:53 +0000 (0:00:00.281) 0:00:06.094 **********
2026-04-17 00:58:29.714204 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714209 | orchestrator |
2026-04-17 00:58:29.714215 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 00:58:29.714221 | orchestrator | Friday 17 April 2026 00:56:54 +0000 (0:00:00.114) 0:00:06.208 **********
2026-04-17 00:58:29.714227 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714233 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.714239 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.714245 | orchestrator |
2026-04-17 00:58:29.714251 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 00:58:29.714268 | orchestrator | Friday 17 April 2026 00:56:54 +0000 (0:00:00.268) 0:00:06.477 **********
2026-04-17 00:58:29.714275 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.714281 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.714287 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.714298 | orchestrator |
2026-04-17 00:58:29.714306 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 00:58:29.714313 | orchestrator | Friday 17 April 2026 00:56:54 +0000 (0:00:00.297) 0:00:06.775 **********
2026-04-17 00:58:29.714319 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714326 | orchestrator |
2026-04-17 00:58:29.714341 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 00:58:29.714348 | orchestrator | Friday 17 April 2026 00:56:54 +0000 (0:00:00.123) 0:00:06.898 **********
2026-04-17 00:58:29.714357 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714369 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.714376 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.714383 | orchestrator |
2026-04-17 00:58:29.714389 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 00:58:29.714395 | orchestrator | Friday 17 April 2026 00:56:55 +0000 (0:00:00.436) 0:00:07.334 **********
2026-04-17 00:58:29.714401 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.714408 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.714414 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.714420 | orchestrator |
2026-04-17 00:58:29.714426 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 00:58:29.714432 | orchestrator | Friday 17 April 2026 00:56:55 +0000 (0:00:00.285) 0:00:07.620 **********
2026-04-17 00:58:29.714437 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714443 | orchestrator |
2026-04-17 00:58:29.714464 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-17 00:58:29.714478 | orchestrator | Friday 17 April 2026 00:56:55 +0000 (0:00:00.113) 0:00:07.733 **********
2026-04-17 00:58:29.714484 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.714490 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.714495 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.714501 | orchestrator |
2026-04-17 00:58:29.714507 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-17 00:58:29.714519 | orchestrator | Friday 17 April 2026 00:56:55 +0000 (0:00:00.276) 0:00:08.010 **********
2026-04-17 00:58:29.714525 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:58:29.714531 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:58:29.714537 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:58:29.714542 | orchestrator |
2026-04-17 00:58:29.714548 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-17 00:58:29.714555 | orchestrator
| Friday 17 April 2026 00:56:56 +0000 (0:00:00.440) 0:00:08.450 ********** 2026-04-17 00:58:29.714561 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714566 | orchestrator | 2026-04-17 00:58:29.714572 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 00:58:29.714578 | orchestrator | Friday 17 April 2026 00:56:56 +0000 (0:00:00.124) 0:00:08.575 ********** 2026-04-17 00:58:29.714584 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714590 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.714595 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:58:29.714601 | orchestrator | 2026-04-17 00:58:29.714607 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 00:58:29.714612 | orchestrator | Friday 17 April 2026 00:56:56 +0000 (0:00:00.298) 0:00:08.874 ********** 2026-04-17 00:58:29.714619 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:58:29.714626 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:58:29.714633 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:58:29.714638 | orchestrator | 2026-04-17 00:58:29.714644 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 00:58:29.714649 | orchestrator | Friday 17 April 2026 00:56:57 +0000 (0:00:00.304) 0:00:09.178 ********** 2026-04-17 00:58:29.714686 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714692 | orchestrator | 2026-04-17 00:58:29.714698 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 00:58:29.714703 | orchestrator | Friday 17 April 2026 00:56:57 +0000 (0:00:00.112) 0:00:09.290 ********** 2026-04-17 00:58:29.714709 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714715 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.714722 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
00:58:29.714729 | orchestrator | 2026-04-17 00:58:29.714735 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 00:58:29.714749 | orchestrator | Friday 17 April 2026 00:56:57 +0000 (0:00:00.291) 0:00:09.582 ********** 2026-04-17 00:58:29.714756 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:58:29.714763 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:58:29.714769 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:58:29.714776 | orchestrator | 2026-04-17 00:58:29.714783 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 00:58:29.714789 | orchestrator | Friday 17 April 2026 00:56:57 +0000 (0:00:00.509) 0:00:10.091 ********** 2026-04-17 00:58:29.714796 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714802 | orchestrator | 2026-04-17 00:58:29.714808 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 00:58:29.714814 | orchestrator | Friday 17 April 2026 00:56:58 +0000 (0:00:00.125) 0:00:10.216 ********** 2026-04-17 00:58:29.714819 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714825 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.714830 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:58:29.714838 | orchestrator | 2026-04-17 00:58:29.714843 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 00:58:29.714849 | orchestrator | Friday 17 April 2026 00:56:58 +0000 (0:00:00.266) 0:00:10.482 ********** 2026-04-17 00:58:29.714855 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:58:29.714860 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:58:29.714866 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:58:29.714873 | orchestrator | 2026-04-17 00:58:29.714879 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 
00:58:29.714884 | orchestrator | Friday 17 April 2026 00:56:58 +0000 (0:00:00.335) 0:00:10.818 ********** 2026-04-17 00:58:29.714891 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714896 | orchestrator | 2026-04-17 00:58:29.714914 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 00:58:29.714921 | orchestrator | Friday 17 April 2026 00:56:58 +0000 (0:00:00.134) 0:00:10.953 ********** 2026-04-17 00:58:29.714928 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.714935 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.714942 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:58:29.714949 | orchestrator | 2026-04-17 00:58:29.714954 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-17 00:58:29.714961 | orchestrator | Friday 17 April 2026 00:56:59 +0000 (0:00:00.275) 0:00:11.229 ********** 2026-04-17 00:58:29.714967 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:58:29.714973 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:58:29.714979 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:58:29.714985 | orchestrator | 2026-04-17 00:58:29.714990 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-17 00:58:29.714995 | orchestrator | Friday 17 April 2026 00:56:59 +0000 (0:00:00.445) 0:00:11.675 ********** 2026-04-17 00:58:29.715000 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.715005 | orchestrator | 2026-04-17 00:58:29.715011 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-17 00:58:29.715016 | orchestrator | Friday 17 April 2026 00:56:59 +0000 (0:00:00.116) 0:00:11.791 ********** 2026-04-17 00:58:29.715023 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.715029 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.715037 | orchestrator | skipping: 
[testbed-node-2]
2026-04-17 00:58:29.715047 | orchestrator |
2026-04-17 00:58:29.715058 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-17 00:58:29.715069 | orchestrator | Friday 17 April 2026 00:56:59 +0000 (0:00:00.286) 0:00:12.077 **********
2026-04-17 00:58:29.715079 | orchestrator | changed: [testbed-node-1]
2026-04-17 00:58:29.715091 | orchestrator | changed: [testbed-node-2]
2026-04-17 00:58:29.715102 | orchestrator | changed: [testbed-node-0]
2026-04-17 00:58:29.715113 | orchestrator |
2026-04-17 00:58:29.715125 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-17 00:58:29.715148 | orchestrator | Friday 17 April 2026 00:57:01 +0000 (0:00:01.656) 0:00:13.734 **********
2026-04-17 00:58:29.715158 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-17 00:58:29.715176 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-17 00:58:29.715186 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-17 00:58:29.715197 | orchestrator |
2026-04-17 00:58:29.715208 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-17 00:58:29.715219 | orchestrator | Friday 17 April 2026 00:57:03 +0000 (0:00:02.106) 0:00:15.840 **********
2026-04-17 00:58:29.715230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-17 00:58:29.715240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-17 00:58:29.715247 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-17 00:58:29.715258 | orchestrator |
2026-04-17 00:58:29.715269 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-17 00:58:29.715278 | orchestrator | Friday 17 April 2026 00:57:05 +0000 (0:00:01.993) 0:00:17.834 **********
2026-04-17 00:58:29.715285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 00:58:29.715295 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 00:58:29.715305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-17 00:58:29.715314 | orchestrator |
2026-04-17 00:58:29.715324 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-17 00:58:29.715335 | orchestrator | Friday 17 April 2026 00:57:07 +0000 (0:00:01.645) 0:00:19.480 **********
2026-04-17 00:58:29.715343 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.715350 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.715357 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.715363 | orchestrator |
2026-04-17 00:58:29.715369 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-17 00:58:29.715375 | orchestrator | Friday 17 April 2026 00:57:07 +0000 (0:00:00.277) 0:00:19.758 **********
2026-04-17 00:58:29.715382 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:58:29.715387 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:58:29.715393 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:58:29.715398 | orchestrator |
2026-04-17 00:58:29.715404 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-17 00:58:29.715409 | orchestrator | Friday 17 April 2026 00:57:07 +0000 (0:00:00.361) 0:00:20.120 **********
2026-04-17 00:58:29.715415 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:58:29.715420 | orchestrator | 2026-04-17 00:58:29.715426 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-17 00:58:29.715432 | orchestrator | Friday 17 April 2026 00:57:08 +0000 (0:00:00.763) 0:00:20.883 ********** 2026-04-17 00:58:29.715455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:58:29.715470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:58:29.715488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:58:29.715508 | orchestrator | 2026-04-17 00:58:29.715515 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-17 00:58:29.715521 | orchestrator | Friday 17 April 2026 00:57:10 +0000 (0:00:01.431) 0:00:22.315 ********** 2026-04-17 00:58:29.715532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:58:29.715544 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.715554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:58:29.715560 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.715573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:58:29.715584 | orchestrator | skipping: [testbed-node-2] 
2026-04-17 00:58:29.715590 | orchestrator | 2026-04-17 00:58:29.715596 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-17 00:58:29.715603 | orchestrator | Friday 17 April 2026 00:57:11 +0000 (0:00:00.983) 0:00:23.298 ********** 2026-04-17 00:58:29.715612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:58:29.715619 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.715632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:58:29.715643 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.715708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-17 00:58:29.715719 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:58:29.715726 | orchestrator | 2026-04-17 00:58:29.715732 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-17 00:58:29.715738 | orchestrator | Friday 17 April 2026 00:57:12 +0000 (0:00:01.037) 0:00:24.335 ********** 2026-04-17 00:58:29.715752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 
'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:58:29.715769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:58:29.715786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-17 00:58:29.715802 | orchestrator | 
2026-04-17 00:58:29.715808 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 00:58:29.715815 | orchestrator | Friday 17 April 2026 00:57:13 +0000 (0:00:01.322) 0:00:25.658 ********** 2026-04-17 00:58:29.715822 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:58:29.715828 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:58:29.715833 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:58:29.715839 | orchestrator | 2026-04-17 00:58:29.715845 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-17 00:58:29.715852 | orchestrator | Friday 17 April 2026 00:57:13 +0000 (0:00:00.439) 0:00:26.098 ********** 2026-04-17 00:58:29.715859 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:58:29.715866 | orchestrator | 2026-04-17 00:58:29.715874 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-17 00:58:29.715881 | orchestrator | Friday 17 April 2026 00:57:14 +0000 (0:00:00.738) 0:00:26.836 ********** 2026-04-17 00:58:29.715888 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:58:29.715894 | orchestrator | 2026-04-17 00:58:29.715901 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-17 00:58:29.715908 | orchestrator | Friday 17 April 2026 00:57:17 +0000 (0:00:02.428) 0:00:29.265 ********** 2026-04-17 00:58:29.715915 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:58:29.715923 | orchestrator | 2026-04-17 00:58:29.715930 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-17 00:58:29.715937 | orchestrator | Friday 17 April 2026 00:57:19 +0000 (0:00:02.545) 0:00:31.810 ********** 2026-04-17 00:58:29.715943 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:58:29.715950 | orchestrator 
| 2026-04-17 00:58:29.715956 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-17 00:58:29.715962 | orchestrator | Friday 17 April 2026 00:57:36 +0000 (0:00:16.563) 0:00:48.374 ********** 2026-04-17 00:58:29.715974 | orchestrator | 2026-04-17 00:58:29.715981 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-17 00:58:29.715989 | orchestrator | Friday 17 April 2026 00:57:36 +0000 (0:00:00.073) 0:00:48.447 ********** 2026-04-17 00:58:29.715996 | orchestrator | 2026-04-17 00:58:29.716004 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-17 00:58:29.716012 | orchestrator | Friday 17 April 2026 00:57:36 +0000 (0:00:00.070) 0:00:48.518 ********** 2026-04-17 00:58:29.716019 | orchestrator | 2026-04-17 00:58:29.716026 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-17 00:58:29.716034 | orchestrator | Friday 17 April 2026 00:57:36 +0000 (0:00:00.068) 0:00:48.587 ********** 2026-04-17 00:58:29.716041 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:58:29.716048 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:58:29.716056 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:58:29.716063 | orchestrator | 2026-04-17 00:58:29.716071 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:58:29.716079 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-17 00:58:29.716088 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-17 00:58:29.716096 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-17 00:58:29.716104 | orchestrator | 2026-04-17 00:58:29.716111 | orchestrator | 2026-04-17 00:58:29.716122 
| orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:58:29.716129 | orchestrator | Friday 17 April 2026 00:58:29 +0000 (0:00:52.700) 0:01:41.287 ********** 2026-04-17 00:58:29.716136 | orchestrator | =============================================================================== 2026-04-17 00:58:29.716143 | orchestrator | horizon : Restart horizon container ------------------------------------ 52.70s 2026-04-17 00:58:29.716150 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.56s 2026-04-17 00:58:29.716158 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.55s 2026-04-17 00:58:29.716165 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.43s 2026-04-17 00:58:29.716173 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.11s 2026-04-17 00:58:29.716180 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.99s 2026-04-17 00:58:29.716187 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.66s 2026-04-17 00:58:29.716194 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.65s 2026-04-17 00:58:29.716202 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.54s 2026-04-17 00:58:29.716210 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.43s 2026-04-17 00:58:29.716216 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.32s 2026-04-17 00:58:29.716222 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.04s 2026-04-17 00:58:29.716229 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.98s 2026-04-17 00:58:29.716236 | orchestrator 
| horizon : include_tasks ------------------------------------------------- 0.76s 2026-04-17 00:58:29.716243 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2026-04-17 00:58:29.716250 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2026-04-17 00:58:29.716261 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-04-17 00:58:29.716268 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-04-17 00:58:29.716275 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-04-17 00:58:29.716290 | orchestrator | horizon : Update policy file name --------------------------------------- 0.45s 2026-04-17 00:58:29.716297 | orchestrator | 2026-04-17 00:58:29 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:29.716303 | orchestrator | 2026-04-17 00:58:29 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:29.716310 | orchestrator | 2026-04-17 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:32.755860 | orchestrator | 2026-04-17 00:58:32 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:32.757618 | orchestrator | 2026-04-17 00:58:32 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:32.757722 | orchestrator | 2026-04-17 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:35.801873 | orchestrator | 2026-04-17 00:58:35 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:35.802922 | orchestrator | 2026-04-17 00:58:35 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:35.803290 | orchestrator | 2026-04-17 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-17 
00:58:38.856543 | orchestrator | 2026-04-17 00:58:38 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:38.857663 | orchestrator | 2026-04-17 00:58:38 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:38.857900 | orchestrator | 2026-04-17 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:41.902627 | orchestrator | 2026-04-17 00:58:41 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:41.904452 | orchestrator | 2026-04-17 00:58:41 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:41.904512 | orchestrator | 2026-04-17 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:44.943409 | orchestrator | 2026-04-17 00:58:44 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:44.945306 | orchestrator | 2026-04-17 00:58:44 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:44.945374 | orchestrator | 2026-04-17 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:47.986196 | orchestrator | 2026-04-17 00:58:47 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:47.987912 | orchestrator | 2026-04-17 00:58:47 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:47.987971 | orchestrator | 2026-04-17 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:51.031219 | orchestrator | 2026-04-17 00:58:51 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:51.033228 | orchestrator | 2026-04-17 00:58:51 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:51.033392 | orchestrator | 2026-04-17 00:58:51 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:54.072214 | orchestrator | 2026-04-17 00:58:54 | INFO  | Task 
505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:54.075574 | orchestrator | 2026-04-17 00:58:54 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:54.076149 | orchestrator | 2026-04-17 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:58:57.121201 | orchestrator | 2026-04-17 00:58:57 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:58:57.123126 | orchestrator | 2026-04-17 00:58:57 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:58:57.123180 | orchestrator | 2026-04-17 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:00.168884 | orchestrator | 2026-04-17 00:59:00 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:00.171245 | orchestrator | 2026-04-17 00:59:00 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:59:00.171314 | orchestrator | 2026-04-17 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:03.211346 | orchestrator | 2026-04-17 00:59:03 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:03.212602 | orchestrator | 2026-04-17 00:59:03 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:59:03.212923 | orchestrator | 2026-04-17 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:06.253029 | orchestrator | 2026-04-17 00:59:06 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:06.253226 | orchestrator | 2026-04-17 00:59:06 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state STARTED 2026-04-17 00:59:06.253474 | orchestrator | 2026-04-17 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:09.292356 | orchestrator | 2026-04-17 00:59:09 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 
00:59:09.293392 | orchestrator | 2026-04-17 00:59:09 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:09.295585 | orchestrator | 2026-04-17 00:59:09 | INFO  | Task 32a701bb-b569-4bfe-a787-ab7294f53236 is in state STARTED 2026-04-17 00:59:09.299452 | orchestrator | 2026-04-17 00:59:09 | INFO  | Task 22da693a-4ff3-4083-a962-d288b1570649 is in state SUCCESS 2026-04-17 00:59:09.301341 | orchestrator | 2026-04-17 00:59:09 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:09.301397 | orchestrator | 2026-04-17 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:12.346239 | orchestrator | 2026-04-17 00:59:12 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:12.348519 | orchestrator | 2026-04-17 00:59:12 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:12.349036 | orchestrator | 2026-04-17 00:59:12 | INFO  | Task 32a701bb-b569-4bfe-a787-ab7294f53236 is in state STARTED 2026-04-17 00:59:12.349645 | orchestrator | 2026-04-17 00:59:12 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:12.349837 | orchestrator | 2026-04-17 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:15.381332 | orchestrator | 2026-04-17 00:59:15 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:15.381687 | orchestrator | 2026-04-17 00:59:15 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:15.382735 | orchestrator | 2026-04-17 00:59:15 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:15.383297 | orchestrator | 2026-04-17 00:59:15 | INFO  | Task 32a701bb-b569-4bfe-a787-ab7294f53236 is in state SUCCESS 2026-04-17 00:59:15.384285 | orchestrator | 2026-04-17 00:59:15 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state STARTED 2026-04-17 
00:59:15.384977 | orchestrator | 2026-04-17 00:59:15 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:15.385012 | orchestrator | 2026-04-17 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:18.410471 | orchestrator | 2026-04-17 00:59:18 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:18.411123 | orchestrator | 2026-04-17 00:59:18 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:18.413118 | orchestrator | 2026-04-17 00:59:18 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:18.414536 | orchestrator | 2026-04-17 00:59:18 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state STARTED 2026-04-17 00:59:18.416275 | orchestrator | 2026-04-17 00:59:18 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:18.416317 | orchestrator | 2026-04-17 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:21.461227 | orchestrator | 2026-04-17 00:59:21 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:21.462724 | orchestrator | 2026-04-17 00:59:21 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:21.464196 | orchestrator | 2026-04-17 00:59:21 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:21.465083 | orchestrator | 2026-04-17 00:59:21 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state STARTED 2026-04-17 00:59:21.466193 | orchestrator | 2026-04-17 00:59:21 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:21.466503 | orchestrator | 2026-04-17 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:24.508564 | orchestrator | 2026-04-17 00:59:24 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:24.510520 | orchestrator 
| 2026-04-17 00:59:24 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:24.512651 | orchestrator | 2026-04-17 00:59:24 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state STARTED 2026-04-17 00:59:24.514550 | orchestrator | 2026-04-17 00:59:24 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state STARTED 2026-04-17 00:59:24.516604 | orchestrator | 2026-04-17 00:59:24 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:24.516894 | orchestrator | 2026-04-17 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:27.558289 | orchestrator | 2026-04-17 00:59:27 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:27.559683 | orchestrator | 2026-04-17 00:59:27 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:27.563384 | orchestrator | 2026-04-17 00:59:27 | INFO  | Task 505c49ae-7fcf-41da-8273-9915203cc8c5 is in state SUCCESS 2026-04-17 00:59:27.563974 | orchestrator | 2026-04-17 00:59:27.564004 | orchestrator | 2026-04-17 00:59:27.564010 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-17 00:59:27.564015 | orchestrator | 2026-04-17 00:59:27.564019 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-17 00:59:27.564024 | orchestrator | Friday 17 April 2026 00:58:15 +0000 (0:00:00.344) 0:00:00.344 ********** 2026-04-17 00:59:27.564028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-17 00:59:27.564034 | orchestrator | 2026-04-17 00:59:27.564038 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-17 00:59:27.564042 | orchestrator | Friday 17 April 2026 00:58:15 +0000 (0:00:00.205) 0:00:00.550 ********** 2026-04-17 
00:59:27.564046 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-17 00:59:27.564068 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-17 00:59:27.564073 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-17 00:59:27.564077 | orchestrator | 2026-04-17 00:59:27.564081 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-17 00:59:27.564085 | orchestrator | Friday 17 April 2026 00:58:16 +0000 (0:00:01.362) 0:00:01.912 ********** 2026-04-17 00:59:27.564089 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-17 00:59:27.564093 | orchestrator | 2026-04-17 00:59:27.564097 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-17 00:59:27.564101 | orchestrator | Friday 17 April 2026 00:58:17 +0000 (0:00:01.040) 0:00:02.952 ********** 2026-04-17 00:59:27.564105 | orchestrator | changed: [testbed-manager] 2026-04-17 00:59:27.564109 | orchestrator | 2026-04-17 00:59:27.564113 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-17 00:59:27.564116 | orchestrator | Friday 17 April 2026 00:58:18 +0000 (0:00:00.833) 0:00:03.786 ********** 2026-04-17 00:59:27.564120 | orchestrator | changed: [testbed-manager] 2026-04-17 00:59:27.564124 | orchestrator | 2026-04-17 00:59:27.564128 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-17 00:59:27.564131 | orchestrator | Friday 17 April 2026 00:58:19 +0000 (0:00:00.793) 0:00:04.579 ********** 2026-04-17 00:59:27.564135 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-04-17 00:59:27.564139 | orchestrator | ok: [testbed-manager] 2026-04-17 00:59:27.564143 | orchestrator | 2026-04-17 00:59:27.564147 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-17 00:59:27.564151 | orchestrator | Friday 17 April 2026 00:58:58 +0000 (0:00:39.158) 0:00:43.738 ********** 2026-04-17 00:59:27.564155 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-17 00:59:27.564159 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-17 00:59:27.564163 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-17 00:59:27.564166 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-17 00:59:27.564170 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-17 00:59:27.564174 | orchestrator | 2026-04-17 00:59:27.564178 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-17 00:59:27.564181 | orchestrator | Friday 17 April 2026 00:59:02 +0000 (0:00:04.072) 0:00:47.810 ********** 2026-04-17 00:59:27.564185 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-17 00:59:27.564189 | orchestrator | 2026-04-17 00:59:27.564192 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-17 00:59:27.564196 | orchestrator | Friday 17 April 2026 00:59:03 +0000 (0:00:00.613) 0:00:48.423 ********** 2026-04-17 00:59:27.564200 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:59:27.564203 | orchestrator | 2026-04-17 00:59:27.564207 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-17 00:59:27.564211 | orchestrator | Friday 17 April 2026 00:59:03 +0000 (0:00:00.120) 0:00:48.544 ********** 2026-04-17 00:59:27.564214 | orchestrator | skipping: [testbed-manager] 2026-04-17 00:59:27.564218 | orchestrator | 2026-04-17 00:59:27.564222 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-04-17 00:59:27.564225 | orchestrator | Friday 17 April 2026 00:59:03 +0000 (0:00:00.295) 0:00:48.839 ********** 2026-04-17 00:59:27.564229 | orchestrator | changed: [testbed-manager] 2026-04-17 00:59:27.564233 | orchestrator | 2026-04-17 00:59:27.564247 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-17 00:59:27.564250 | orchestrator | Friday 17 April 2026 00:59:04 +0000 (0:00:01.338) 0:00:50.178 ********** 2026-04-17 00:59:27.564254 | orchestrator | changed: [testbed-manager] 2026-04-17 00:59:27.564258 | orchestrator | 2026-04-17 00:59:27.564262 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-17 00:59:27.564265 | orchestrator | Friday 17 April 2026 00:59:05 +0000 (0:00:00.694) 0:00:50.873 ********** 2026-04-17 00:59:27.564274 | orchestrator | changed: [testbed-manager] 2026-04-17 00:59:27.564278 | orchestrator | 2026-04-17 00:59:27.564282 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-17 00:59:27.564286 | orchestrator | Friday 17 April 2026 00:59:06 +0000 (0:00:00.589) 0:00:51.462 ********** 2026-04-17 00:59:27.564290 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-17 00:59:27.564293 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-17 00:59:27.564297 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-17 00:59:27.564301 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-17 00:59:27.564305 | orchestrator | 2026-04-17 00:59:27.564308 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:59:27.564312 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 00:59:27.564317 | orchestrator | 2026-04-17 00:59:27.564321 | orchestrator | 2026-04-17 
00:59:27.564331 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:59:27.564335 | orchestrator | Friday 17 April 2026 00:59:07 +0000 (0:00:01.435) 0:00:52.898 ********** 2026-04-17 00:59:27.564339 | orchestrator | =============================================================================== 2026-04-17 00:59:27.564343 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.16s 2026-04-17 00:59:27.564346 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.07s 2026-04-17 00:59:27.564350 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.44s 2026-04-17 00:59:27.564354 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s 2026-04-17 00:59:27.564357 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.34s 2026-04-17 00:59:27.564361 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.04s 2026-04-17 00:59:27.564365 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.83s 2026-04-17 00:59:27.564368 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.79s 2026-04-17 00:59:27.564372 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2026-04-17 00:59:27.564375 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.61s 2026-04-17 00:59:27.564379 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2026-04-17 00:59:27.564383 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2026-04-17 00:59:27.564386 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2026-04-17 00:59:27.564390 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-04-17 00:59:27.564394 | orchestrator |
2026-04-17 00:59:27.564398 | orchestrator |
2026-04-17 00:59:27.564401 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 00:59:27.564405 | orchestrator |
2026-04-17 00:59:27.564409 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 00:59:27.564413 | orchestrator | Friday 17 April 2026 00:59:11 +0000 (0:00:00.187) 0:00:00.187 **********
2026-04-17 00:59:27.564416 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:59:27.564420 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:59:27.564424 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:59:27.564427 | orchestrator |
2026-04-17 00:59:27.564431 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 00:59:27.564435 | orchestrator | Friday 17 April 2026 00:59:11 +0000 (0:00:00.345) 0:00:00.533 **********
2026-04-17 00:59:27.564438 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-17 00:59:27.564442 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-17 00:59:27.564446 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-17 00:59:27.564453 | orchestrator |
2026-04-17 00:59:27.564457 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-04-17 00:59:27.564461 | orchestrator |
2026-04-17 00:59:27.564535 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-04-17 00:59:27.564541 | orchestrator | Friday 17 April 2026 00:59:11 +0000 (0:00:00.526) 0:00:01.060 **********
2026-04-17 00:59:27.564545 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:59:27.564549 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:59:27.564552 | orchestrator | ok: [testbed-node-1]
2026-04-17 00:59:27.564556 | orchestrator |
2026-04-17 00:59:27.564560 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 00:59:27.564565 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:59:27.564639 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:59:27.564643 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 00:59:27.564647 | orchestrator |
2026-04-17 00:59:27.564651 | orchestrator |
2026-04-17 00:59:27.564656 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 00:59:27.564662 | orchestrator | Friday 17 April 2026 00:59:13 +0000 (0:00:01.058) 0:00:02.118 **********
2026-04-17 00:59:27.564669 | orchestrator | ===============================================================================
2026-04-17 00:59:27.564682 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.06s
2026-04-17 00:59:27.564690 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-04-17 00:59:27.564696 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-04-17 00:59:27.564702 | orchestrator |
2026-04-17 00:59:27.565366 | orchestrator |
2026-04-17 00:59:27.565392 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 00:59:27.565398 | orchestrator |
2026-04-17 00:59:27.565402 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 00:59:27.565407 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:00.319) 0:00:00.320 **********
2026-04-17 00:59:27.565411 | orchestrator | ok: [testbed-node-0]
2026-04-17 00:59:27.565416 |
orchestrator | ok: [testbed-node-1]
2026-04-17 00:59:27.565421 | orchestrator | ok: [testbed-node-2]
2026-04-17 00:59:27.565425 | orchestrator |
2026-04-17 00:59:27.565429 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 00:59:27.565434 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:00.281) 0:00:00.601 **********
2026-04-17 00:59:27.565438 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-17 00:59:27.565443 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-17 00:59:27.565448 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-17 00:59:27.565452 | orchestrator |
2026-04-17 00:59:27.565456 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-17 00:59:27.565460 | orchestrator |
2026-04-17 00:59:27.565465 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17 00:59:27.565469 | orchestrator | Friday 17 April 2026 00:56:48 +0000 (0:00:00.269) 0:00:00.871 **********
2026-04-17 00:59:27.565474 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 00:59:27.565479 | orchestrator |
2026-04-17 00:59:27.565483 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-17 00:59:27.565487 | orchestrator | Friday 17 April 2026 00:56:49 +0000 (0:00:00.622) 0:00:01.494 **********
2026-04-17 00:59:27.565495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.565511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.565529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.565535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-04-17 00:59:27.565547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565561 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-17 00:59:27.565565 | orchestrator |
2026-04-17 00:59:27.565571 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-17 00:59:27.565575 | orchestrator | Friday 17 April 2026 00:56:51 +0000 (0:00:02.251) 0:00:03.746 **********
2026-04-17 00:59:27.565578 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:59:27.565583 | orchestrator |
2026-04-17 00:59:27.565589 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-17 00:59:27.565593 | orchestrator | Friday 17 April 2026 00:56:51 +0000 (0:00:00.116) 0:00:03.862 **********
2026-04-17 00:59:27.565597 | orchestrator | skipping: [testbed-node-0]
2026-04-17 00:59:27.565600 | orchestrator | skipping: [testbed-node-1]
2026-04-17 00:59:27.565604 | orchestrator | skipping: [testbed-node-2]
2026-04-17 00:59:27.565608 | orchestrator |
2026-04-17 00:59:27.565611 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-17 00:59:27.565615 | orchestrator | Friday 17 April 2026 00:56:51 +0000 (0:00:00.251) 0:00:04.114 **********
2026-04-17 00:59:27.565619 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 00:59:27.565622 | orchestrator |
2026-04-17 00:59:27.565626 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-17
00:59:27.565630 | orchestrator | Friday 17 April 2026 00:56:52 +0000 (0:00:00.877) 0:00:04.991 ********** 2026-04-17 00:59:27.565634 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:59:27.565640 | orchestrator | 2026-04-17 00:59:27.565644 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-17 00:59:27.565648 | orchestrator | Friday 17 April 2026 00:56:53 +0000 (0:00:00.654) 0:00:05.646 ********** 2026-04-17 00:59:27.565652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.565656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.565663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.565671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.565703 | orchestrator | 2026-04-17 00:59:27.565707 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-17 00:59:27.565711 | orchestrator | Friday 17 April 2026 00:56:56 +0000 (0:00:03.139) 0:00:08.786 ********** 2026-04-17 00:59:27.565721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.565729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.565733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.565737 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.565741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.565745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-17 00:59:27.565751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.565984 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566066 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566072 | orchestrator | 2026-04-17 00:59:27.566078 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-17 00:59:27.566084 | orchestrator | Friday 17 April 2026 00:56:57 +0000 (0:00:00.518) 0:00:09.304 ********** 2026-04-17 00:59:27.566091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566140 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566152 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 
00:59:27.566175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566178 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566182 | orchestrator | 2026-04-17 00:59:27.566186 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-17 00:59:27.566190 | orchestrator | Friday 17 April 2026 00:56:58 +0000 (0:00:00.901) 0:00:10.206 ********** 2026-04-17 00:59:27.566194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.566198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.566208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.566216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-04-17 00:59:27.566243 | orchestrator | 2026-04-17 00:59:27.566249 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-17 00:59:27.566253 | orchestrator | Friday 17 April 2026 00:57:01 +0000 (0:00:03.216) 0:00:13.423 ********** 2026-04-17 00:59:27.566261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.566265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 
00:59:27.566269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.566273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.566289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.566305 | orchestrator | 2026-04-17 00:59:27.566309 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-17 00:59:27.566313 | orchestrator | Friday 17 April 2026 00:57:06 +0000 (0:00:04.851) 0:00:18.274 ********** 2026-04-17 00:59:27.566317 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.566320 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:59:27.566329 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:59:27.566333 | orchestrator | 2026-04-17 00:59:27.566337 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-17 00:59:27.566341 | orchestrator | Friday 17 April 2026 00:57:07 +0000 (0:00:01.507) 0:00:19.781 ********** 2026-04-17 00:59:27.566344 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566348 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566352 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566355 | orchestrator | 2026-04-17 00:59:27.566359 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-17 00:59:27.566363 | orchestrator | Friday 17 April 2026 00:57:08 +0000 (0:00:00.956) 0:00:20.738 ********** 2026-04-17 00:59:27.566367 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566370 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566374 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566378 | orchestrator | 2026-04-17 00:59:27.566381 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-17 00:59:27.566385 | orchestrator | Friday 17 April 2026 00:57:08 +0000 (0:00:00.281) 0:00:21.019 ********** 2026-04-17 00:59:27.566389 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566392 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566396 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566400 | orchestrator | 2026-04-17 00:59:27.566403 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-17 00:59:27.566407 | orchestrator | Friday 17 April 2026 00:57:09 +0000 (0:00:00.267) 0:00:21.287 ********** 2026-04-17 00:59:27.566417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566430 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566453 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-17 00:59:27.566461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-17 00:59:27.566465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-17 00:59:27.566472 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566476 | orchestrator | 2026-04-17 00:59:27.566479 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-17 00:59:27.566483 | orchestrator | Friday 17 April 2026 00:57:09 +0000 (0:00:00.529) 0:00:21.816 ********** 2026-04-17 00:59:27.566487 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566491 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566494 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566498 | orchestrator | 2026-04-17 00:59:27.566502 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-17 00:59:27.566505 | orchestrator | Friday 17 April 2026 00:57:10 +0000 (0:00:00.427) 0:00:22.244 ********** 2026-04-17 00:59:27.566509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-17 00:59:27.566513 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-17 00:59:27.566517 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-17 00:59:27.566521 | orchestrator | 2026-04-17 00:59:27.566525 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-17 00:59:27.566528 | orchestrator | Friday 17 April 2026 00:57:11 +0000 (0:00:01.685) 0:00:23.929 ********** 2026-04-17 00:59:27.566532 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 00:59:27.566536 | orchestrator | 2026-04-17 00:59:27.566540 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-17 00:59:27.566543 | orchestrator | Friday 17 April 2026 00:57:12 +0000 (0:00:00.985) 0:00:24.915 ********** 2026-04-17 00:59:27.566547 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.566551 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.566555 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.566558 | orchestrator | 2026-04-17 00:59:27.566565 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-17 00:59:27.566571 | orchestrator | Friday 17 April 2026 00:57:13 +0000 (0:00:00.518) 0:00:25.434 ********** 2026-04-17 00:59:27.566577 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 00:59:27.566589 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 00:59:27.566600 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 00:59:27.566605 | orchestrator | 2026-04-17 00:59:27.566611 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-17 00:59:27.566620 | orchestrator | Friday 17 April 2026 00:57:14 +0000 (0:00:01.155) 0:00:26.589 ********** 2026-04-17 00:59:27.566626 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:59:27.566632 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:59:27.566638 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:59:27.566643 | orchestrator | 2026-04-17 
00:59:27.566649 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-17 00:59:27.566655 | orchestrator | Friday 17 April 2026 00:57:14 +0000 (0:00:00.422) 0:00:27.011 ********** 2026-04-17 00:59:27.566660 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-17 00:59:27.566665 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-17 00:59:27.566671 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-17 00:59:27.566677 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-17 00:59:27.566792 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-17 00:59:27.566801 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-17 00:59:27.566807 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-17 00:59:27.566813 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-17 00:59:27.566820 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-17 00:59:27.566826 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-17 00:59:27.566833 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-17 00:59:27.566840 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-17 00:59:27.566847 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
2026-04-17 00:59:27.566853 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-17 00:59:27.566861 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-17 00:59:27.566868 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 00:59:27.566875 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 00:59:27.566882 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-17 00:59:27.566889 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 00:59:27.566896 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 00:59:27.566902 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-17 00:59:27.566908 | orchestrator | 2026-04-17 00:59:27.566914 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-17 00:59:27.566920 | orchestrator | Friday 17 April 2026 00:57:23 +0000 (0:00:08.726) 0:00:35.738 ********** 2026-04-17 00:59:27.566926 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 00:59:27.566932 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 00:59:27.566937 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-17 00:59:27.566943 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 00:59:27.566949 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 00:59:27.566955 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-17 00:59:27.566961 | orchestrator | 2026-04-17 00:59:27.566967 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-17 00:59:27.566973 | orchestrator | Friday 17 April 2026 00:57:26 +0000 (0:00:02.580) 0:00:38.318 ********** 2026-04-17 00:59:27.566990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.567002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.567010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-17 00:59:27.567017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.567024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.567030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-17 00:59:27.567045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.567114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.567126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-17 00:59:27.567133 | orchestrator | 2026-04-17 00:59:27.567139 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-17 00:59:27.567145 | orchestrator | Friday 17 April 2026 00:57:28 +0000 (0:00:02.383) 0:00:40.701 ********** 2026-04-17 00:59:27.567151 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567157 | orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.567163 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.567169 | orchestrator | 2026-04-17 00:59:27.567175 | 
orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-17 00:59:27.567181 | orchestrator | Friday 17 April 2026 00:57:29 +0000 (0:00:00.432) 0:00:41.134 ********** 2026-04-17 00:59:27.567188 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567194 | orchestrator | 2026-04-17 00:59:27.567200 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-17 00:59:27.567206 | orchestrator | Friday 17 April 2026 00:57:31 +0000 (0:00:02.457) 0:00:43.591 ********** 2026-04-17 00:59:27.567212 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567219 | orchestrator | 2026-04-17 00:59:27.567225 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-17 00:59:27.567231 | orchestrator | Friday 17 April 2026 00:57:33 +0000 (0:00:02.470) 0:00:46.062 ********** 2026-04-17 00:59:27.567237 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:59:27.567243 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:59:27.567249 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:59:27.567255 | orchestrator | 2026-04-17 00:59:27.567262 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-17 00:59:27.567268 | orchestrator | Friday 17 April 2026 00:57:34 +0000 (0:00:00.844) 0:00:46.906 ********** 2026-04-17 00:59:27.567278 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:59:27.567284 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:59:27.567291 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:59:27.567296 | orchestrator | 2026-04-17 00:59:27.567302 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-17 00:59:27.567308 | orchestrator | Friday 17 April 2026 00:57:35 +0000 (0:00:00.276) 0:00:47.183 ********** 2026-04-17 00:59:27.567314 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567321 | 
orchestrator | skipping: [testbed-node-1] 2026-04-17 00:59:27.567326 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.567332 | orchestrator | 2026-04-17 00:59:27.567337 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-17 00:59:27.567342 | orchestrator | Friday 17 April 2026 00:57:35 +0000 (0:00:00.283) 0:00:47.467 ********** 2026-04-17 00:59:27.567348 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567353 | orchestrator | 2026-04-17 00:59:27.567359 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-17 00:59:27.567365 | orchestrator | Friday 17 April 2026 00:57:51 +0000 (0:00:15.920) 0:01:03.387 ********** 2026-04-17 00:59:27.567371 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567378 | orchestrator | 2026-04-17 00:59:27.567384 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-17 00:59:27.567391 | orchestrator | Friday 17 April 2026 00:58:03 +0000 (0:00:12.538) 0:01:15.925 ********** 2026-04-17 00:59:27.567396 | orchestrator | 2026-04-17 00:59:27.567405 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-17 00:59:27.567412 | orchestrator | Friday 17 April 2026 00:58:03 +0000 (0:00:00.063) 0:01:15.988 ********** 2026-04-17 00:59:27.567418 | orchestrator | 2026-04-17 00:59:27.567424 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-17 00:59:27.567436 | orchestrator | Friday 17 April 2026 00:58:03 +0000 (0:00:00.061) 0:01:16.050 ********** 2026-04-17 00:59:27.567444 | orchestrator | 2026-04-17 00:59:27.567450 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-17 00:59:27.567457 | orchestrator | Friday 17 April 2026 00:58:03 +0000 (0:00:00.062) 0:01:16.113 ********** 2026-04-17 00:59:27.567463 | 
orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567469 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:59:27.567474 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:59:27.567481 | orchestrator | 2026-04-17 00:59:27.567486 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-17 00:59:27.567492 | orchestrator | Friday 17 April 2026 00:58:15 +0000 (0:00:11.180) 0:01:27.293 ********** 2026-04-17 00:59:27.567499 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:59:27.567505 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:59:27.567511 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567517 | orchestrator | 2026-04-17 00:59:27.567523 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-17 00:59:27.567528 | orchestrator | Friday 17 April 2026 00:58:22 +0000 (0:00:07.293) 0:01:34.587 ********** 2026-04-17 00:59:27.567534 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567540 | orchestrator | changed: [testbed-node-2] 2026-04-17 00:59:27.567546 | orchestrator | changed: [testbed-node-1] 2026-04-17 00:59:27.567552 | orchestrator | 2026-04-17 00:59:27.567559 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-17 00:59:27.567564 | orchestrator | Friday 17 April 2026 00:58:28 +0000 (0:00:06.367) 0:01:40.955 ********** 2026-04-17 00:59:27.567570 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 00:59:27.567575 | orchestrator | 2026-04-17 00:59:27.567580 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-17 00:59:27.567586 | orchestrator | Friday 17 April 2026 00:58:29 +0000 (0:00:00.774) 0:01:41.730 ********** 2026-04-17 00:59:27.567591 | orchestrator | ok: [testbed-node-1] 2026-04-17 00:59:27.567601 | 
orchestrator | ok: [testbed-node-0] 2026-04-17 00:59:27.567607 | orchestrator | ok: [testbed-node-2] 2026-04-17 00:59:27.567612 | orchestrator | 2026-04-17 00:59:27.567619 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-17 00:59:27.567624 | orchestrator | Friday 17 April 2026 00:58:30 +0000 (0:00:00.796) 0:01:42.526 ********** 2026-04-17 00:59:27.567631 | orchestrator | changed: [testbed-node-0] 2026-04-17 00:59:27.567636 | orchestrator | 2026-04-17 00:59:27.567643 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-17 00:59:27.567648 | orchestrator | Friday 17 April 2026 00:58:32 +0000 (0:00:01.660) 0:01:44.187 ********** 2026-04-17 00:59:27.567654 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-17 00:59:27.567659 | orchestrator | 2026-04-17 00:59:27.567665 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-17 00:59:27.567672 | orchestrator | Friday 17 April 2026 00:58:45 +0000 (0:00:13.648) 0:01:57.835 ********** 2026-04-17 00:59:27.567677 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-17 00:59:27.567684 | orchestrator | 2026-04-17 00:59:27.567690 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-17 00:59:27.567696 | orchestrator | Friday 17 April 2026 00:59:13 +0000 (0:00:27.896) 0:02:25.732 ********** 2026-04-17 00:59:27.567702 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-17 00:59:27.567709 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-17 00:59:27.567715 | orchestrator | 2026-04-17 00:59:27.567721 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-17 00:59:27.567728 | orchestrator | Friday 17 
April 2026 00:59:21 +0000 (0:00:07.856) 0:02:33.588 ********** 2026-04-17 00:59:27.567734 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567740 | orchestrator | 2026-04-17 00:59:27.567747 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-17 00:59:27.567753 | orchestrator | Friday 17 April 2026 00:59:21 +0000 (0:00:00.128) 0:02:33.717 ********** 2026-04-17 00:59:27.567760 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567766 | orchestrator | 2026-04-17 00:59:27.567801 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-17 00:59:27.567807 | orchestrator | Friday 17 April 2026 00:59:21 +0000 (0:00:00.106) 0:02:33.824 ********** 2026-04-17 00:59:27.567813 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567819 | orchestrator | 2026-04-17 00:59:27.567825 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-17 00:59:27.567831 | orchestrator | Friday 17 April 2026 00:59:21 +0000 (0:00:00.128) 0:02:33.953 ********** 2026-04-17 00:59:27.567837 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567843 | orchestrator | 2026-04-17 00:59:27.567849 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-17 00:59:27.567855 | orchestrator | Friday 17 April 2026 00:59:22 +0000 (0:00:00.315) 0:02:34.268 ********** 2026-04-17 00:59:27.567861 | orchestrator | ok: [testbed-node-0] 2026-04-17 00:59:27.567867 | orchestrator | 2026-04-17 00:59:27.567873 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-17 00:59:27.567879 | orchestrator | Friday 17 April 2026 00:59:25 +0000 (0:00:03.615) 0:02:37.884 ********** 2026-04-17 00:59:27.567885 | orchestrator | skipping: [testbed-node-0] 2026-04-17 00:59:27.567891 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
00:59:27.567897 | orchestrator | skipping: [testbed-node-2] 2026-04-17 00:59:27.567904 | orchestrator | 2026-04-17 00:59:27.567910 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 00:59:27.567922 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 00:59:27.567934 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:59:27.567945 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-17 00:59:27.567952 | orchestrator | 2026-04-17 00:59:27.567958 | orchestrator | 2026-04-17 00:59:27.567965 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 00:59:27.567971 | orchestrator | Friday 17 April 2026 00:59:26 +0000 (0:00:00.586) 0:02:38.470 ********** 2026-04-17 00:59:27.567978 | orchestrator | =============================================================================== 2026-04-17 00:59:27.567984 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.90s 2026-04-17 00:59:27.567990 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.92s 2026-04-17 00:59:27.567996 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.65s 2026-04-17 00:59:27.568002 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.54s 2026-04-17 00:59:27.568009 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 11.18s 2026-04-17 00:59:27.568015 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.73s 2026-04-17 00:59:27.568021 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.86s 2026-04-17 00:59:27.568026 | orchestrator | 
keystone : Restart keystone-fernet container ---------------------------- 7.29s 2026-04-17 00:59:27.568032 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.37s 2026-04-17 00:59:27.568037 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.85s 2026-04-17 00:59:27.568043 | orchestrator | keystone : Creating default user role ----------------------------------- 3.62s 2026-04-17 00:59:27.568048 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.22s 2026-04-17 00:59:27.568054 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.14s 2026-04-17 00:59:27.568060 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.58s 2026-04-17 00:59:27.568066 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.47s 2026-04-17 00:59:27.568072 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.46s 2026-04-17 00:59:27.568078 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.38s 2026-04-17 00:59:27.568084 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.25s 2026-04-17 00:59:27.568091 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.69s 2026-04-17 00:59:27.568097 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.66s 2026-04-17 00:59:27.568104 | orchestrator | 2026-04-17 00:59:27 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state STARTED 2026-04-17 00:59:27.568395 | orchestrator | 2026-04-17 00:59:27 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:27.568541 | orchestrator | 2026-04-17 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:30.620049 | orchestrator | 
2026-04-17 00:59:30 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:30.620574 | orchestrator | 2026-04-17 00:59:30 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:30.623129 | orchestrator | 2026-04-17 00:59:30 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED 2026-04-17 00:59:30.624947 | orchestrator | 2026-04-17 00:59:30 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state STARTED 2026-04-17 00:59:30.625931 | orchestrator | 2026-04-17 00:59:30 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:30.625968 | orchestrator | 2026-04-17 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:54.894414 | orchestrator | 2026-04-17 00:59:54 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:54.894633 | orchestrator | 2026-04-17 00:59:54 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 00:59:54.894656 | orchestrator | 2026-04-17 00:59:54 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:54.895646 | orchestrator | 2026-04-17 00:59:54 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED 2026-04-17 00:59:54.895712 | orchestrator | 2026-04-17 00:59:54 | INFO  | Task 32451adc-ab4d-4683-845d-fce47bdefc70 is in state SUCCESS 2026-04-17 00:59:54.896755 | orchestrator | 2026-04-17 00:59:54 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:54.896811 | orchestrator | 2026-04-17 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-04-17 00:59:57.919267 | orchestrator | 2026-04-17 00:59:57 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 00:59:57.919417 | orchestrator | 2026-04-17 00:59:57 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 00:59:57.920546 | orchestrator | 2026-04-17 00:59:57 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 00:59:57.921269 | orchestrator | 2026-04-17 00:59:57 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED 2026-04-17 00:59:57.922503 | orchestrator | 2026-04-17 00:59:57 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state STARTED 2026-04-17 00:59:57.922531 | orchestrator | 2026-04-17 00:59:57 | INFO  | Wait 1 
second(s) until the next check 2026-04-17 01:00:00.960241 | orchestrator | 2026-04-17 01:00:00 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 01:00:00.960336 | orchestrator | 2026-04-17 01:00:00 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:00:00.961170 | orchestrator | 2026-04-17 01:00:00 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 01:00:00.961706 | orchestrator | 2026-04-17 01:00:00 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED 2026-04-17 01:00:00.962575 | orchestrator | 2026-04-17 01:00:00.962602 | orchestrator | 2026-04-17 01:00:00.962609 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:00:00.962614 | orchestrator | 2026-04-17 01:00:00.962618 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:00:00.962622 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:00.291) 0:00:00.291 ********** 2026-04-17 01:00:00.962626 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:00:00.962631 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:00:00.962635 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:00:00.962639 | orchestrator | ok: [testbed-manager] 2026-04-17 01:00:00.962643 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:00:00.962647 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:00:00.962651 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:00:00.962655 | orchestrator | 2026-04-17 01:00:00.962659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:00:00.962662 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:00.710) 0:00:01.001 ********** 2026-04-17 01:00:00.962666 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962671 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962675 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962679 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962682 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962686 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962690 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-17 01:00:00.962694 | orchestrator | 2026-04-17 01:00:00.962697 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-17 01:00:00.962701 | orchestrator | 2026-04-17 01:00:00.962705 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-17 01:00:00.962709 | orchestrator | Friday 17 April 2026 00:59:18 +0000 (0:00:00.771) 0:00:01.773 ********** 2026-04-17 01:00:00.962713 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:00:00.962718 | orchestrator | 2026-04-17 01:00:00.962722 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-17 01:00:00.962725 | orchestrator | Friday 17 April 2026 00:59:20 +0000 (0:00:01.791) 0:00:03.564 ********** 2026-04-17 01:00:00.962729 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-17 01:00:00.962733 | orchestrator | 2026-04-17 01:00:00.962737 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-17 01:00:00.962741 | orchestrator | Friday 17 April 2026 00:59:24 +0000 (0:00:04.236) 0:00:07.800 ********** 2026-04-17 01:00:00.962745 | orchestrator | changed: [testbed-node-0] => (item=swift -> 
https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-17 01:00:00.962761 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-17 01:00:00.962765 | orchestrator | 2026-04-17 01:00:00.962769 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-17 01:00:00.962773 | orchestrator | Friday 17 April 2026 00:59:31 +0000 (0:00:07.154) 0:00:14.956 ********** 2026-04-17 01:00:00.962777 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:00:00.962781 | orchestrator | 2026-04-17 01:00:00.962784 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-17 01:00:00.962799 | orchestrator | Friday 17 April 2026 00:59:35 +0000 (0:00:04.000) 0:00:18.956 ********** 2026-04-17 01:00:00.962803 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-17 01:00:00.962807 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:00:00.962810 | orchestrator | 2026-04-17 01:00:00.962814 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-17 01:00:00.962818 | orchestrator | Friday 17 April 2026 00:59:40 +0000 (0:00:04.281) 0:00:23.238 ********** 2026-04-17 01:00:00.962822 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:00:00.962826 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-17 01:00:00.962848 | orchestrator | 2026-04-17 01:00:00.962853 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-17 01:00:00.962856 | orchestrator | Friday 17 April 2026 00:59:47 +0000 (0:00:07.100) 0:00:30.338 ********** 2026-04-17 01:00:00.962860 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-17 01:00:00.962864 | orchestrator | 
2026-04-17 01:00:00.962868 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:00:00.962872 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962876 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962880 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962884 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962888 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962899 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962903 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.962907 | orchestrator | 2026-04-17 01:00:00.962911 | orchestrator | 2026-04-17 01:00:00.962915 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:00:00.962919 | orchestrator | Friday 17 April 2026 00:59:52 +0000 (0:00:05.059) 0:00:35.397 ********** 2026-04-17 01:00:00.962923 | orchestrator | =============================================================================== 2026-04-17 01:00:00.962927 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.16s 2026-04-17 01:00:00.962931 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.10s 2026-04-17 01:00:00.962934 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.06s 2026-04-17 01:00:00.962938 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.28s 2026-04-17 
01:00:00.962942 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.24s 2026-04-17 01:00:00.962946 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.00s 2026-04-17 01:00:00.962949 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.79s 2026-04-17 01:00:00.962953 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2026-04-17 01:00:00.962957 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-04-17 01:00:00.962961 | orchestrator | 2026-04-17 01:00:00.962964 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-17 01:00:00.962968 | orchestrator | 2.16.14 2026-04-17 01:00:00.962972 | orchestrator | 2026-04-17 01:00:00.962976 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-17 01:00:00.962983 | orchestrator | 2026-04-17 01:00:00.962987 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-17 01:00:00.962990 | orchestrator | Friday 17 April 2026 00:59:12 +0000 (0:00:00.221) 0:00:00.221 ********** 2026-04-17 01:00:00.962995 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.962998 | orchestrator | 2026-04-17 01:00:00.963002 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-17 01:00:00.963006 | orchestrator | Friday 17 April 2026 00:59:14 +0000 (0:00:02.162) 0:00:02.384 ********** 2026-04-17 01:00:00.963009 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963013 | orchestrator | 2026-04-17 01:00:00.963017 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-17 01:00:00.963021 | orchestrator | Friday 17 April 2026 00:59:15 +0000 (0:00:00.953) 0:00:03.337 ********** 
2026-04-17 01:00:00.963024 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963028 | orchestrator | 2026-04-17 01:00:00.963032 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-17 01:00:00.963036 | orchestrator | Friday 17 April 2026 00:59:16 +0000 (0:00:01.073) 0:00:04.411 ********** 2026-04-17 01:00:00.963039 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963043 | orchestrator | 2026-04-17 01:00:00.963050 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-17 01:00:00.963054 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:01.007) 0:00:05.419 ********** 2026-04-17 01:00:00.963064 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963068 | orchestrator | 2026-04-17 01:00:00.963072 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-17 01:00:00.963081 | orchestrator | Friday 17 April 2026 00:59:18 +0000 (0:00:00.833) 0:00:06.252 ********** 2026-04-17 01:00:00.963085 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963089 | orchestrator | 2026-04-17 01:00:00.963093 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-17 01:00:00.963096 | orchestrator | Friday 17 April 2026 00:59:19 +0000 (0:00:00.851) 0:00:07.103 ********** 2026-04-17 01:00:00.963102 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963108 | orchestrator | 2026-04-17 01:00:00.963114 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-17 01:00:00.963121 | orchestrator | Friday 17 April 2026 00:59:20 +0000 (0:00:01.317) 0:00:08.421 ********** 2026-04-17 01:00:00.963126 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963132 | orchestrator | 2026-04-17 01:00:00.963137 | orchestrator | TASK [Create admin user] 
******************************************************* 2026-04-17 01:00:00.963143 | orchestrator | Friday 17 April 2026 00:59:21 +0000 (0:00:01.059) 0:00:09.480 ********** 2026-04-17 01:00:00.963150 | orchestrator | changed: [testbed-manager] 2026-04-17 01:00:00.963156 | orchestrator | 2026-04-17 01:00:00.963162 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-17 01:00:00.963168 | orchestrator | Friday 17 April 2026 00:59:35 +0000 (0:00:13.989) 0:00:23.470 ********** 2026-04-17 01:00:00.963175 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:00:00.963181 | orchestrator | 2026-04-17 01:00:00.963188 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-17 01:00:00.963193 | orchestrator | 2026-04-17 01:00:00.963200 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-17 01:00:00.963207 | orchestrator | Friday 17 April 2026 00:59:35 +0000 (0:00:00.163) 0:00:23.633 ********** 2026-04-17 01:00:00.963214 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:00:00.963220 | orchestrator | 2026-04-17 01:00:00.963226 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-17 01:00:00.963233 | orchestrator | 2026-04-17 01:00:00.963240 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-17 01:00:00.963246 | orchestrator | Friday 17 April 2026 00:59:47 +0000 (0:00:11.868) 0:00:35.502 ********** 2026-04-17 01:00:00.963259 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:00:00.963266 | orchestrator | 2026-04-17 01:00:00.963271 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-17 01:00:00.963278 | orchestrator | 2026-04-17 01:00:00.963285 | orchestrator | TASK [Restart ceph manager service] ******************************************** 
2026-04-17 01:00:00.963296 | orchestrator | Friday 17 April 2026 00:59:49 +0000 (0:00:01.531) 0:00:37.033 ********** 2026-04-17 01:00:00.963303 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:00:00.963309 | orchestrator | 2026-04-17 01:00:00.963315 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:00:00.963321 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-17 01:00:00.963328 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.963336 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.963343 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:00:00.963350 | orchestrator | 2026-04-17 01:00:00.963356 | orchestrator | 2026-04-17 01:00:00.963363 | orchestrator | 2026-04-17 01:00:00.963369 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:00:00.963375 | orchestrator | Friday 17 April 2026 01:00:00 +0000 (0:00:11.503) 0:00:48.537 ********** 2026-04-17 01:00:00.963382 | orchestrator | =============================================================================== 2026-04-17 01:00:00.963388 | orchestrator | Restart ceph manager service ------------------------------------------- 24.90s 2026-04-17 01:00:00.963394 | orchestrator | Create admin user ------------------------------------------------------ 13.99s 2026-04-17 01:00:00.963400 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.16s 2026-04-17 01:00:00.963407 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.32s 2026-04-17 01:00:00.963413 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s 
2026-04-17 01:00:00.963419 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.06s 2026-04-17 01:00:00.963425 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.01s 2026-04-17 01:00:00.963432 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.95s 2026-04-17 01:00:00.963438 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.85s 2026-04-17 01:00:00.963445 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.83s 2026-04-17 01:00:00.963451 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2026-04-17 01:00:00.963457 | orchestrator | 2026-04-17 01:00:00 | INFO  | Task 0f063cb1-d90b-4344-bae7-85aea2da4094 is in state SUCCESS 2026-04-17 01:00:00.963464 | orchestrator | 2026-04-17 01:00:00 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:00:03.985803 | orchestrator | 2026-04-17 01:00:03 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state STARTED 2026-04-17 01:00:03.986001 | orchestrator | 2026-04-17 01:00:03 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:00:03.987562 | orchestrator | 2026-04-17 01:00:03 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED 2026-04-17 01:00:03.988056 | orchestrator | 2026-04-17 01:00:03 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED 2026-04-17 01:00:03.988092 | orchestrator | 2026-04-17 01:00:03 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:01:53.643174 | orchestrator | 2026-04-17 01:01:53 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:01:53.645279 | orchestrator | 2026-04-17 01:01:53 | INFO  | Task ec4a9249-1bf1-4a55-8e52-e45dc273561e is in state SUCCESS 2026-04-17 01:01:53.646568 | orchestrator | 2026-04-17 01:01:53.646668 | orchestrator | 2026-04-17 01:01:53.646677 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:01:53.646683 | orchestrator | 2026-04-17 01:01:53.646687 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:01:53.646691 | orchestrator | Friday 17 April 2026 00:59:11 +0000 (0:00:00.322) 0:00:00.322 ********** 2026-04-17 01:01:53.646695 | orchestrator | ok: [testbed-manager] 2026-04-17 01:01:53.646701 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:01:53.646705 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:01:53.646709 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:01:53.646713 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:01:53.646717 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:01:53.646720 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:01:53.646724 | orchestrator | 2026-04-17 01:01:53.646728 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:01:53.646732 | orchestrator | Friday 17 April 2026 00:59:11 +0000 (0:00:00.710) 0:00:01.032 ********** 2026-04-17 01:01:53.646736 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646741 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646745 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646748 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646752 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646756 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646760 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-17 01:01:53.646764 | orchestrator | 2026-04-17 01:01:53.646804 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-17 01:01:53.646810 | orchestrator | 2026-04-17 01:01:53.646832 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-17 01:01:53.646844 | orchestrator | Friday 17 April 2026 00:59:12 +0000 (0:00:00.817) 0:00:01.850 ********** 2026-04-17 01:01:53.646879 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:01:53.646887 | orchestrator | 2026-04-17 01:01:53.646894 | orchestrator | TASK [prometheus : Ensuring config directories exist] 
************************** 2026-04-17 01:01:53.646900 | orchestrator | Friday 17 April 2026 00:59:13 +0000 (0:00:01.028) 0:00:02.879 ********** 2026-04-17 01:01:53.646910 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 01:01:53.646942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.646950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.646970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.646991 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647057 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647092 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 01:01:53.647099 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647237 | orchestrator | 2026-04-17 01:01:53.647242 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-17 01:01:53.647246 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:03.763) 0:00:06.642 ********** 2026-04-17 01:01:53.647251 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:01:53.647256 | orchestrator | 2026-04-17 01:01:53.647260 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-17 01:01:53.647268 | orchestrator | Friday 17 April 2026 00:59:18 +0000 (0:00:01.280) 0:00:07.923 ********** 2026-04-17 01:01:53.647273 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 01:01:53.647278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647357 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.647362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-17 01:01:53.647378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647393 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647514 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647537 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 01:01:53.647542 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647577 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.647581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.647588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648109 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648126 | orchestrator | 2026-04-17 01:01:53.648133 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-17 01:01:53.648140 | orchestrator | Friday 17 April 2026 00:59:24 +0000 (0:00:05.838) 0:00:13.761 ********** 2026-04-17 01:01:53.648148 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 01:01:53.648154 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 01:01:53.648266 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648270 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.648281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648295 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648324 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.648329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648360 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 01:01:53.648364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648390 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.648398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648407 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648414 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.648418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648426 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.648430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2026-04-17 01:01:53.648452 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.648456 | orchestrator | 2026-04-17 01:01:53.648460 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-17 01:01:53.648464 | orchestrator | Friday 17 April 2026 00:59:26 +0000 (0:00:01.428) 0:00:15.189 ********** 2026-04-17 01:01:53.648467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648480 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648494 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-17 01:01:53.648502 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-17 01:01:53.648515 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648547 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.648551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648555 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-17 01:01:53.648576 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.648580 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.648586 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.648593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648606 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.648610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648624 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.648629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-17 01:01:53.648636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-17 01:01:53.648650 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.648654 | orchestrator | 2026-04-17 01:01:53.648659 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-17 01:01:53.648663 | orchestrator | Friday 17 April 2026 00:59:28 +0000 (0:00:02.082) 0:00:17.272 ********** 2026-04-17 01:01:53.648668 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648673 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 01:01:53.648678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648725 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648731 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.648743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648762 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648810 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 01:01:53.648823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648858 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.648871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.648888 | orchestrator | 2026-04-17 01:01:53.648892 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-17 01:01:53.648897 | orchestrator | Friday 17 April 2026 00:59:34 +0000 (0:00:06.494) 0:00:23.766 ********** 2026-04-17 01:01:53.648901 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 01:01:53.648906 | orchestrator | 2026-04-17 01:01:53.648910 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-17 01:01:53.648918 | orchestrator | Friday 17 April 2026 00:59:35 +0000 (0:00:00.929) 0:00:24.696 ********** 2026-04-17 01:01:53.648925 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.648936 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.648948 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.648955 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.648963 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.648970 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.648985 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649037 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 
'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649046 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649058 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649065 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649071 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1085646, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3372219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649079 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649096 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649104 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649111 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649122 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649129 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649136 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649143 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649166 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649184 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649191 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649198 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649205 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649212 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1085688, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3424716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649222 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649391 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649406 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649410 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649414 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649418 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649422 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649444 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649451 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649455 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649459 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649463 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649467 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649488 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649497 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649504 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1085636, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3368478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649513 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649521 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649531 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649541 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649568 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649576 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649583 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649589 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649596 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649602 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649612 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1085667, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3394384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649636 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649643 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649650 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649655 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649662 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649668 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649677 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649713 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649719 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649726 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649733 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649739 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649752 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649781 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649787 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649794 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649800 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-17 01:01:53.649806 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649819 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649840 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649847 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649853 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1085630, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.649859 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649865 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649872 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649885 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649894 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649901 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 
1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649907 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649913 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649919 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649925 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649936 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649947 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649953 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649959 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649965 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649971 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.649981 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1085650, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.33757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.649989 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650097 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 
'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650107 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650114 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.650122 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650131 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650138 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650156 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650161 
| orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.650169 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650174 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.650178 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650183 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650187 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650192 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.650200 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650204 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650211 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650219 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650224 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.650229 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1085663, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3389041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650234 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-17 01:01:53.650238 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.650243 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1085653, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.337904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650252 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1085644, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.336904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650257 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085686, 'dev': 79, 
'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3413503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650264 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085626, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3345833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650272 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1085712, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3445385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650277 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1085673, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.339904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650281 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1085632, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.335197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650286 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1085628, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.334904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650296 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1085658, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3385344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650300 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1085656, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3382974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650307 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1085710, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3443055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-17 01:01:53.650312 | orchestrator | 2026-04-17 01:01:53.650317 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-17 01:01:53.650322 | orchestrator | Friday 17 April 2026 00:59:58 +0000 (0:00:22.774) 0:00:47.471 ********** 2026-04-17 01:01:53.650326 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 01:01:53.650331 | orchestrator | 2026-04-17 01:01:53.650335 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-17 01:01:53.650339 | orchestrator | Friday 17 April 2026 00:59:59 +0000 (0:00:00.704) 0:00:48.175 ********** 2026-04-17 01:01:53.650346 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650351 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650356 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650360 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650365 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650369 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 01:01:53.650374 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650378 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650383 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650392 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650396 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 01:01:53.650401 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650406 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650413 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650423 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650427 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-17 01:01:53.650430 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650434 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650438 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650442 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 
2026-04-17 01:01:53.650446 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650449 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 01:01:53.650453 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650457 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650461 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650464 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650468 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650472 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-17 01:01:53.650476 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650483 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650487 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650491 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650503 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 01:01:53.650507 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.650517 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650522 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-17 01:01:53.650528 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-17 01:01:53.650534 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-17 01:01:53.650540 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 01:01:53.650546 | orchestrator | 2026-04-17 01:01:53.650552 | orchestrator | TASK [prometheus : Copying over prometheus config 
file] ************************ 2026-04-17 01:01:53.650557 | orchestrator | Friday 17 April 2026 01:00:00 +0000 (0:00:01.873) 0:00:50.048 ********** 2026-04-17 01:01:53.650564 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 01:01:53.650572 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 01:01:53.650578 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.650584 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.650589 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 01:01:53.650595 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.650601 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 01:01:53.650606 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.650612 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 01:01:53.650618 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.650624 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-17 01:01:53.650630 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.650639 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-17 01:01:53.650651 | orchestrator | 2026-04-17 01:01:53.650657 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-17 01:01:53.650664 | orchestrator | Friday 17 April 2026 01:00:16 +0000 (0:00:16.060) 0:01:06.108 ********** 2026-04-17 01:01:53.650670 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 01:01:53.650676 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 01:01:53.650682 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 01:01:53.650693 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.650700 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 01:01:53.650706 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.650712 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 01:01:53.650718 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.650723 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 01:01:53.650729 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.650736 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-17 01:01:53.650743 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.650749 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-17 01:01:53.650756 | orchestrator | 2026-04-17 01:01:53.650763 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-17 01:01:53.650769 | orchestrator | Friday 17 April 2026 01:00:20 +0000 (0:00:03.065) 0:01:09.174 ********** 2026-04-17 01:01:53.650776 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 01:01:53.650782 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.650789 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 01:01:53.650795 | orchestrator | skipping: [testbed-node-1] 
2026-04-17 01:01:53.650802 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 01:01:53.650808 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.650815 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 01:01:53.650822 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.650829 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 01:01:53.650835 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.650842 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-17 01:01:53.650849 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-17 01:01:53.650856 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.650862 | orchestrator | 2026-04-17 01:01:53.650868 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-17 01:01:53.650875 | orchestrator | Friday 17 April 2026 01:00:22 +0000 (0:00:01.961) 0:01:11.135 ********** 2026-04-17 01:01:53.650881 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 01:01:53.650888 | orchestrator | 2026-04-17 01:01:53.650894 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-17 01:01:53.650900 | orchestrator | Friday 17 April 2026 01:00:22 +0000 (0:00:00.770) 0:01:11.905 ********** 2026-04-17 01:01:53.650907 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.650919 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.650927 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 01:01:53.650934 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.650940 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.650946 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.650953 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.650959 | orchestrator | 2026-04-17 01:01:53.650966 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-17 01:01:53.650972 | orchestrator | Friday 17 April 2026 01:00:23 +0000 (0:00:00.846) 0:01:12.752 ********** 2026-04-17 01:01:53.650979 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.650985 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.650991 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.651016 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.651022 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:01:53.651028 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:01:53.651034 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:01:53.651041 | orchestrator | 2026-04-17 01:01:53.651047 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-17 01:01:53.651054 | orchestrator | Friday 17 April 2026 01:00:25 +0000 (0:00:02.256) 0:01:15.008 ********** 2026-04-17 01:01:53.651061 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651067 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.651074 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651080 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.651090 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651097 | orchestrator | skipping: [testbed-node-2] 2026-04-17 
01:01:53.651103 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651109 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.651115 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651122 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.651133 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651139 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.651146 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-17 01:01:53.651152 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.651158 | orchestrator | 2026-04-17 01:01:53.651165 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-17 01:01:53.651171 | orchestrator | Friday 17 April 2026 01:00:28 +0000 (0:00:02.114) 0:01:17.123 ********** 2026-04-17 01:01:53.651177 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 01:01:53.651184 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 01:01:53.651191 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 01:01:53.651196 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.651202 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.651209 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.651215 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 01:01:53.651221 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
01:01:53.651228 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-17 01:01:53.651234 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 01:01:53.651246 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.651252 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-17 01:01:53.651257 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.651264 | orchestrator | 2026-04-17 01:01:53.651270 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-17 01:01:53.651277 | orchestrator | Friday 17 April 2026 01:00:29 +0000 (0:00:01.969) 0:01:19.092 ********** 2026-04-17 01:01:53.651284 | orchestrator | [WARNING]: Skipped 2026-04-17 01:01:53.651292 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-17 01:01:53.651298 | orchestrator | due to this access issue: 2026-04-17 01:01:53.651305 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-17 01:01:53.651312 | orchestrator | not a directory 2026-04-17 01:01:53.651319 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-17 01:01:53.651326 | orchestrator | 2026-04-17 01:01:53.651333 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-17 01:01:53.651340 | orchestrator | Friday 17 April 2026 01:00:30 +0000 (0:00:00.963) 0:01:20.056 ********** 2026-04-17 01:01:53.651347 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.651353 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.651359 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.651365 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.651372 | 
orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.651378 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.651385 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.651392 | orchestrator | 2026-04-17 01:01:53.651399 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-17 01:01:53.651406 | orchestrator | Friday 17 April 2026 01:00:31 +0000 (0:00:00.613) 0:01:20.669 ********** 2026-04-17 01:01:53.651413 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:01:53.651419 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:01:53.651426 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:01:53.651433 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:01:53.651440 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:01:53.651447 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:01:53.651454 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:01:53.651460 | orchestrator | 2026-04-17 01:01:53.651466 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-17 01:01:53.651473 | orchestrator | Friday 17 April 2026 01:00:32 +0000 (0:00:00.693) 0:01:21.362 ********** 2026-04-17 01:01:53.651484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-17 01:01:53.651501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651521 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651550 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 
01:01:53.651558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-17 01:01:53.651586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-17 01:01:53.651711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651759 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-17 01:01:53.651814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-17 01:01:53.651828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-04-17 01:01:53.651835 | orchestrator |
2026-04-17 01:01:53.651841 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-17 01:01:53.651847 | orchestrator | Friday 17 April 2026 01:00:36 +0000 (0:00:04.704) 0:01:26.067 **********
2026-04-17 01:01:53.651854 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-17 01:01:53.651861 | orchestrator | skipping: [testbed-manager]
2026-04-17 01:01:53.651867 | orchestrator |
2026-04-17 01:01:53.651874 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.651880 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:01.545) 0:01:27.613 **********
2026-04-17 01:01:53.651886 | orchestrator |
2026-04-17 01:01:53.651893 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.651900 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:00.092) 0:01:27.705 **********
2026-04-17 01:01:53.651907 | orchestrator |
2026-04-17 01:01:53.651914 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.651921 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:00.065) 0:01:27.771 **********
2026-04-17 01:01:53.651932 | orchestrator |
2026-04-17 01:01:53.651939 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.651946 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:00.077) 0:01:27.848 **********
2026-04-17 01:01:53.651953 | orchestrator |
2026-04-17 01:01:53.651960 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.651967 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:00.065) 0:01:27.913 **********
2026-04-17 01:01:53.651974 | orchestrator |
2026-04-17 01:01:53.651981 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.651988 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:00.065) 0:01:27.978 **********
2026-04-17 01:01:53.652014 | orchestrator |
2026-04-17 01:01:53.652020 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-17 01:01:53.652026 | orchestrator | Friday 17 April 2026 01:00:38 +0000 (0:00:00.063) 0:01:28.042 **********
2026-04-17 01:01:53.652032 | orchestrator |
2026-04-17 01:01:53.652042 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-17 01:01:53.652049 | orchestrator | Friday 17 April 2026 01:00:39 +0000 (0:00:00.128) 0:01:28.170 **********
2026-04-17 01:01:53.652056 | orchestrator | changed: [testbed-manager]
2026-04-17 01:01:53.652062 | orchestrator |
2026-04-17 01:01:53.652070 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-17 01:01:53.652077 | orchestrator | Friday 17 April 2026 01:00:52 +0000 (0:00:13.140) 0:01:41.311 **********
2026-04-17 01:01:53.652084 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:01:53.652091 | orchestrator | changed: [testbed-manager]
2026-04-17 01:01:53.652099 | orchestrator | changed: [testbed-node-3]
2026-04-17 01:01:53.652110 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:01:53.652117 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:01:53.652124 | orchestrator | changed: [testbed-node-4]
2026-04-17 01:01:53.652131 | orchestrator | changed: [testbed-node-5]
2026-04-17 01:01:53.652138 | orchestrator |
2026-04-17 01:01:53.652145 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-17 01:01:53.652152 | orchestrator | Friday 17 April 2026 01:01:05 +0000 (0:00:13.724) 0:01:55.035 **********
2026-04-17 01:01:53.652159 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:01:53.652165 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:01:53.652172 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:01:53.652178 | orchestrator |
2026-04-17 01:01:53.652185 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-17 01:01:53.652191 | orchestrator | Friday 17 April 2026 01:01:10 +0000 (0:00:04.345) 0:01:59.381 **********
2026-04-17 01:01:53.652199 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:01:53.652205 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:01:53.652212 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:01:53.652219 | orchestrator |
2026-04-17 01:01:53.652226 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-17 01:01:53.652233 | orchestrator | Friday 17 April 2026 01:01:15 +0000 (0:00:04.789) 0:02:04.170 **********
2026-04-17 01:01:53.652240 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:01:53.652247 | orchestrator | changed: [testbed-node-4]
2026-04-17 01:01:53.652254 | orchestrator | changed: [testbed-node-5]
2026-04-17 01:01:53.652260 | orchestrator | changed: [testbed-node-3]
2026-04-17 01:01:53.652268 | orchestrator | changed: [testbed-manager]
2026-04-17 01:01:53.652275 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:01:53.652281 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:01:53.652288 | orchestrator |
2026-04-17 01:01:53.652295 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-17 01:01:53.652302 | orchestrator | Friday 17 April 2026 01:01:26 +0000 (0:00:11.155) 0:02:15.325 **********
2026-04-17 01:01:53.652309 | orchestrator | changed: [testbed-manager]
2026-04-17 01:01:53.652316 | orchestrator |
2026-04-17 01:01:53.652322 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-17 01:01:53.652334 | orchestrator | Friday 17 April 2026 01:01:36 +0000 (0:00:10.058) 0:02:25.384 **********
2026-04-17 01:01:53.652342 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:01:53.652348 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:01:53.652355 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:01:53.652362 | orchestrator |
2026-04-17 01:01:53.652368 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-17 01:01:53.652375 | orchestrator | Friday 17 April 2026 01:01:41 +0000 (0:00:05.641) 0:02:31.025 **********
2026-04-17 01:01:53.652382 | orchestrator | changed: [testbed-manager]
2026-04-17 01:01:53.652389 | orchestrator |
2026-04-17 01:01:53.652396 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-17 01:01:53.652403 | orchestrator | Friday 17 April 2026 01:01:46 +0000 (0:00:04.807) 0:02:35.833 **********
2026-04-17 01:01:53.652409 | orchestrator | changed: [testbed-node-4]
2026-04-17 01:01:53.652417 | orchestrator | changed: [testbed-node-3]
2026-04-17 01:01:53.652424 | orchestrator | changed: [testbed-node-5]
2026-04-17 01:01:53.652430 | orchestrator |
2026-04-17 01:01:53.652438 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:01:53.652446 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-17 01:01:53.652453 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-17 01:01:53.652461 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-17 01:01:53.652467 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-17 01:01:53.652474 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-17 01:01:53.652481 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-17 01:01:53.652489 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-17 01:01:53.652496 | orchestrator |
2026-04-17 01:01:53.652503 | orchestrator |
2026-04-17 01:01:53.652510 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:01:53.652517 | orchestrator | Friday 17 April 2026 01:01:52 +0000 (0:00:05.559) 0:02:41.392 **********
2026-04-17 01:01:53.652524 | orchestrator | ===============================================================================
2026-04-17 01:01:53.652531 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.77s
2026-04-17 01:01:53.652541 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.06s
2026-04-17 01:01:53.652549 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.72s
2026-04-17 01:01:53.652556 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.14s
2026-04-17 01:01:53.652563 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 11.16s
2026-04-17 01:01:53.652570 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 10.06s
2026-04-17 01:01:53.652577 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.49s
2026-04-17 01:01:53.652589 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.84s
2026-04-17 01:01:53.652596 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.64s
2026-04-17 01:01:53.652603 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.56s
2026-04-17 01:01:53.652611 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.81s
2026-04-17 01:01:53.652623 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 4.79s
2026-04-17 01:01:53.652630 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.70s
2026-04-17 01:01:53.652636 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.35s
2026-04-17 01:01:53.652643 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.76s
2026-04-17 01:01:53.652651 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.07s
2026-04-17 01:01:53.652667 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.26s
2026-04-17 01:01:53.652674 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.11s
2026-04-17 01:01:53.652679 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.08s
2026-04-17 01:01:53.652685 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.97s
2026-04-17 01:01:53.652691 | orchestrator | 2026-04-17 01:01:53 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:01:53.652697 | orchestrator | 2026-04-17 01:01:53 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED
2026-04-17 01:01:53.652703 | orchestrator | 2026-04-17 01:01:53 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:01:53.652708 | orchestrator | 2026-04-17 01:01:53 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:01:56.669679 | orchestrator | 2026-04-17 01:01:56 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:01:56.670638 | orchestrator | 2026-04-17 01:01:56 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:01:56.671087 | orchestrator | 2026-04-17 01:01:56 | INFO
 | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED
2026-04-17 01:01:56.671918 | orchestrator | 2026-04-17 01:01:56 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:01:56.671942 | orchestrator | 2026-04-17 01:01:56 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:01:59.711287 | orchestrator | 2026-04-17 01:01:59 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:01:59.714114 | orchestrator | 2026-04-17 01:01:59 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:01:59.716488 | orchestrator | 2026-04-17 01:01:59 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED
2026-04-17 01:01:59.718371 | orchestrator | 2026-04-17 01:01:59 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:01:59.718497 | orchestrator | 2026-04-17 01:01:59 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:02:02.750687 | orchestrator | 2026-04-17 01:02:02 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:02:02.752881 | orchestrator | 2026-04-17 01:02:02 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:02:02.755530 | orchestrator | 2026-04-17 01:02:02 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED
2026-04-17 01:02:02.757910 | orchestrator | 2026-04-17 01:02:02 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:02:02.757974 | orchestrator | 2026-04-17 01:02:02 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:02:05.804876 | orchestrator | 2026-04-17 01:02:05 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:02:05.806984 | orchestrator | 2026-04-17 01:02:05 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:02:05.809147 | orchestrator | 2026-04-17 01:02:05 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state STARTED
2026-04-17 01:02:05.811413 | orchestrator | 2026-04-17 01:02:05 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:02:05.811464 | orchestrator | 2026-04-17 01:02:05 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:02:08.850316 | orchestrator | 2026-04-17 01:02:08 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:02:08.850896 | orchestrator | 2026-04-17 01:02:08 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state STARTED
2026-04-17 01:02:08.851834 | orchestrator | 2026-04-17 01:02:08 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:02:08.853674 | orchestrator | 2026-04-17 01:02:08 | INFO  | Task 6a141cad-22da-4843-a106-d17648b727a9 is in state SUCCESS
2026-04-17 01:02:08.855102 | orchestrator |
2026-04-17 01:02:08.855149 | orchestrator |
2026-04-17 01:02:08.855259 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 01:02:08.855269 | orchestrator |
2026-04-17 01:02:08.855275 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 01:02:08.855280 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:00.248) 0:00:00.248 **********
2026-04-17 01:02:08.855286 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:02:08.855291 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:02:08.855297 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:02:08.855302 | orchestrator |
2026-04-17 01:02:08.855307 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 01:02:08.855313 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:00.235) 0:00:00.483 **********
2026-04-17 01:02:08.855318 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-17 01:02:08.855324 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-17 01:02:08.855330 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-17 01:02:08.855335 | orchestrator |
2026-04-17 01:02:08.855340 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-17 01:02:08.855346 | orchestrator |
2026-04-17 01:02:08.855351 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 01:02:08.855356 | orchestrator | Friday 17 April 2026 00:59:17 +0000 (0:00:00.342) 0:00:00.826 **********
2026-04-17 01:02:08.855362 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 01:02:08.855367 | orchestrator |
2026-04-17 01:02:08.855372 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-17 01:02:08.855378 | orchestrator | Friday 17 April 2026 00:59:18 +0000 (0:00:00.614) 0:00:01.441 **********
2026-04-17 01:02:08.855384 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-17 01:02:08.855389 | orchestrator |
2026-04-17 01:02:08.855395 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-17 01:02:08.855400 | orchestrator | Friday 17 April 2026 00:59:23 +0000 (0:00:04.773) 0:00:06.214 **********
2026-04-17 01:02:08.855405 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-17 01:02:08.855411 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-17 01:02:08.855416 | orchestrator |
2026-04-17 01:02:08.855421 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-17 01:02:08.855427 | orchestrator | Friday 17 April 2026 00:59:31 +0000 (0:00:07.660) 0:00:13.875 **********
2026-04-17 01:02:08.855432 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-17 01:02:08.855437 | orchestrator |
2026-04-17 01:02:08.855443 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-17 01:02:08.855448 | orchestrator | Friday 17 April 2026 00:59:34 +0000 (0:00:03.783) 0:00:17.659 **********
2026-04-17 01:02:08.855466 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-17 01:02:08.855471 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 01:02:08.855477 | orchestrator |
2026-04-17 01:02:08.855482 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-17 01:02:08.855487 | orchestrator | Friday 17 April 2026 00:59:39 +0000 (0:00:05.010) 0:00:22.669 **********
2026-04-17 01:02:08.855493 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17 01:02:08.855498 | orchestrator |
2026-04-17 01:02:08.855503 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-17 01:02:08.855508 | orchestrator | Friday 17 April 2026 00:59:43 +0000 (0:00:03.727) 0:00:26.397 **********
2026-04-17 01:02:08.855514 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-17 01:02:08.855519 | orchestrator |
2026-04-17 01:02:08.855524 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-17 01:02:08.855530 | orchestrator | Friday 17 April 2026 00:59:48 +0000 (0:00:04.645) 0:00:31.042 **********
2026-04-17 01:02:08.855553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes':
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 01:02:08.855563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 01:02:08.855576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-17 01:02:08.855583 | orchestrator | 2026-04-17 01:02:08.855588 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-17 01:02:08.855594 | orchestrator | Friday 17 April 2026 00:59:51 +0000 (0:00:03.262) 0:00:34.305 ********** 2026-04-17 01:02:08.855599 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:02:08.855604 | orchestrator | 2026-04-17 01:02:08.855610 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-17 01:02:08.855620 | orchestrator | Friday 17 April 2026 00:59:52 +0000 (0:00:00.593) 0:00:34.899 ********** 2026-04-17 01:02:08.855626 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:02:08.855631 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:02:08.855637 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:02:08.855642 | orchestrator | 2026-04-17 01:02:08.855647 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-17 01:02:08.855653 | orchestrator | Friday 17 April 2026 00:59:55 +0000 (0:00:03.482) 0:00:38.381 ********** 2026-04-17 01:02:08.855658 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:02:08.855663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:02:08.855669 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:02:08.855674 | orchestrator | 2026-04-17 01:02:08.855683 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-17 01:02:08.855694 | orchestrator | Friday 17 April 2026 00:59:57 +0000 (0:00:01.515) 0:00:39.896 ********** 2026-04-17 01:02:08.855702 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:02:08.855711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:02:08.855725 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:02:08.855735 | orchestrator | 2026-04-17 01:02:08.855745 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-17 01:02:08.855755 | orchestrator | Friday 17 April 2026 00:59:58 +0000 (0:00:01.295) 0:00:41.192 ********** 2026-04-17 01:02:08.855764 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:02:08.855774 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:02:08.855784 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:02:08.855794 | orchestrator | 2026-04-17 01:02:08.855801 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-17 01:02:08.855806 | orchestrator | Friday 17 April 2026 00:59:59 +0000 (0:00:00.718) 0:00:41.911 ********** 2026-04-17 01:02:08.855811 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:02:08.855817 | 
orchestrator |
2026-04-17 01:02:08.855822 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-17 01:02:08.855828 | orchestrator | Friday 17 April 2026 00:59:59 +0000 (0:00:00.116) 0:00:42.028 **********
2026-04-17 01:02:08.855833 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.855838 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.855844 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.855849 | orchestrator |
2026-04-17 01:02:08.855855 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 01:02:08.855860 | orchestrator | Friday 17 April 2026 00:59:59 +0000 (0:00:00.237) 0:00:42.265 **********
2026-04-17 01:02:08.855865 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 01:02:08.855871 | orchestrator |
2026-04-17 01:02:08.855876 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-17 01:02:08.855881 | orchestrator | Friday 17 April 2026 01:00:00 +0000 (0:00:01.019) 0:00:43.285 **********
2026-04-17 01:02:08.855891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.855904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.855915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.855921 | orchestrator |
2026-04-17 01:02:08.855927 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-04-17 01:02:08.855935 | orchestrator | Friday 17 April 2026 01:00:05 +0000 (0:00:04.642) 0:00:47.928 **********
2026-04-17 01:02:08.855946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.855961 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.855967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.855974 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.855986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.855996 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856002 | orchestrator |
2026-04-17 01:02:08.856007 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-17 01:02:08.856030 | orchestrator | Friday 17 April 2026 01:00:08 +0000 (0:00:03.288) 0:00:51.216 **********
2026-04-17 01:02:08.856041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856051 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856085 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856108 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856113 | orchestrator |
2026-04-17 01:02:08.856119 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-04-17 01:02:08.856125 | orchestrator | Friday 17 April 2026 01:00:12 +0000 (0:00:03.668) 0:00:54.884 **********
2026-04-17 01:02:08.856130 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856136 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856141 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856146 | orchestrator |
2026-04-17 01:02:08.856152 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-04-17 01:02:08.856157 | orchestrator | Friday 17 April 2026 01:00:14 +0000 (0:00:02.950) 0:00:57.835 **********
2026-04-17 01:02:08.856166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856194 | orchestrator |
2026-04-17 01:02:08.856199 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-17 01:02:08.856205 | orchestrator | Friday 17 April 2026 01:00:19 +0000 (0:00:04.717) 0:01:02.552 **********
2026-04-17 01:02:08.856210 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:02:08.856216 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:02:08.856225 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856230 | orchestrator |
2026-04-17 01:02:08.856236 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-17 01:02:08.856241 | orchestrator | Friday 17 April 2026 01:00:26 +0000 (0:00:06.372) 0:01:08.925 **********
2026-04-17 01:02:08.856249 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856255 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856260 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856266 | orchestrator |
2026-04-17 01:02:08.856271 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-17 01:02:08.856277 | orchestrator | Friday 17 April 2026 01:00:30 +0000 (0:00:04.390) 0:01:13.315 **********
2026-04-17 01:02:08.856282 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856287 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856293 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856298 | orchestrator |
2026-04-17 01:02:08.856304 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-17 01:02:08.856310 | orchestrator | Friday 17 April 2026 01:00:33 +0000 (0:00:03.302) 0:01:16.617 **********
2026-04-17 01:02:08.856316 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856322 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856330 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856336 | orchestrator |
2026-04-17 01:02:08.856342 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-17 01:02:08.856348 | orchestrator | Friday 17 April 2026 01:00:36 +0000 (0:00:02.788) 0:01:19.405 **********
2026-04-17 01:02:08.856354 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856360 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856365 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856371 | orchestrator |
2026-04-17 01:02:08.856378 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-17 01:02:08.856383 | orchestrator | Friday 17 April 2026 01:00:40 +0000 (0:00:03.705) 0:01:23.111 **********
2026-04-17 01:02:08.856389 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856396 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856402 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856407 | orchestrator |
2026-04-17 01:02:08.856413 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-17 01:02:08.856419 | orchestrator | Friday 17 April 2026 01:00:40 +0000 (0:00:00.505) 0:01:23.617 **********
2026-04-17 01:02:08.856425 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 01:02:08.856431 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856437 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 01:02:08.856443 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856448 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-17 01:02:08.856454 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856460 | orchestrator |
2026-04-17 01:02:08.856466 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-17 01:02:08.856472 | orchestrator | Friday 17 April 2026 01:00:44 +0000 (0:00:03.807) 0:01:27.424 **********
2026-04-17 01:02:08.856477 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856483 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856489 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856495 | orchestrator |
2026-04-17 01:02:08.856501 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-17 01:02:08.856507 | orchestrator | Friday 17 April 2026 01:00:48 +0000 (0:00:04.016) 0:01:31.440 **********
2026-04-17 01:02:08.856512 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856518 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856524 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856530 | orchestrator |
2026-04-17 01:02:08.856539 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-17 01:02:08.856545 | orchestrator | Friday 17 April 2026 01:00:52 +0000 (0:00:03.718) 0:01:35.159 **********
2026-04-17 01:02:08.856554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-17 01:02:08.856583 | orchestrator |
2026-04-17 01:02:08.856589 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-17 01:02:08.856595 | orchestrator | Friday 17 April 2026 01:00:59 +0000 (0:00:07.229) 0:01:42.388 **********
2026-04-17 01:02:08.856601 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:02:08.856606 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:02:08.856612 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:02:08.856618 | orchestrator |
2026-04-17 01:02:08.856624 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-04-17 01:02:08.856630 | orchestrator | Friday 17 April 2026 01:00:59 +0000 (0:00:00.225) 0:01:42.613 **********
2026-04-17 01:02:08.856635 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856641 | orchestrator |
2026-04-17 01:02:08.856647 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-04-17 01:02:08.856654 | orchestrator | Friday 17 April 2026 01:01:02 +0000 (0:00:02.710) 0:01:45.324 **********
2026-04-17 01:02:08.856667 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856678 | orchestrator |
2026-04-17 01:02:08.856688 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-17 01:02:08.856697 | orchestrator | Friday 17 April 2026 01:01:05 +0000 (0:00:02.660) 0:01:47.985 **********
2026-04-17 01:02:08.856705 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856715 | orchestrator |
2026-04-17 01:02:08.856724 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-17 01:02:08.856734 | orchestrator | Friday 17 April 2026 01:01:06 +0000 (0:00:01.859) 0:01:49.844 **********
2026-04-17 01:02:08.856743 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856753 | orchestrator |
2026-04-17 01:02:08.856763 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-17 01:02:08.856774 | orchestrator | Friday 17 April 2026 01:01:35 +0000 (0:00:28.978) 0:02:18.823 **********
2026-04-17 01:02:08.856784 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856795 | orchestrator |
2026-04-17 01:02:08.856810 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-17 01:02:08.856818 | orchestrator | Friday 17 April 2026 01:01:38 +0000 (0:00:00.067) 0:02:21.184 **********
2026-04-17 01:02:08.856823 | orchestrator |
2026-04-17 01:02:08.856829 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-17 01:02:08.856835 | orchestrator | Friday 17 April 2026 01:01:38 +0000 (0:00:00.059) 0:02:21.251 **********
2026-04-17 01:02:08.856841 | orchestrator |
2026-04-17 01:02:08.856847 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-17 01:02:08.856853 | orchestrator | Friday 17 April 2026 01:01:38 +0000 (0:00:00.066) 0:02:21.311 **********
2026-04-17 01:02:08.856858 | orchestrator |
2026-04-17 01:02:08.856864 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-17 01:02:08.856875 | orchestrator | Friday 17 April 2026 01:01:38 +0000 (0:00:00.066) 0:02:21.378 **********
2026-04-17 01:02:08.856881 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:02:08.856887 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:02:08.856893 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:02:08.856899 | orchestrator |
2026-04-17 01:02:08.856904 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:02:08.856911 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-04-17 01:02:08.856918 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-17 01:02:08.856924 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-17 01:02:08.856929 | orchestrator |
2026-04-17 01:02:08.856935 | orchestrator |
2026-04-17 01:02:08.856941 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:02:08.856947 | orchestrator | Friday 17 April 2026 01:02:06 +0000 (0:00:28.454) 0:02:49.832 **********
2026-04-17 01:02:08.856953 | orchestrator | ===============================================================================
2026-04-17 01:02:08.856959 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.98s
2026-04-17 01:02:08.856964 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.45s
2026-04-17 01:02:08.856970 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.66s
2026-04-17 01:02:08.856976 | orchestrator | glance : Check glance containers ---------------------------------------- 7.23s
2026-04-17 01:02:08.856982 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.37s
2026-04-17 01:02:08.856988 | orchestrator | service-ks-register : glance | Creating users --------------------------- 5.01s
2026-04-17 01:02:08.856993 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.77s
2026-04-17 01:02:08.856999 | orchestrator | glance : Copying over config.json files for services -------------------- 4.72s
2026-04-17 01:02:08.857005 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.64s
2026-04-17 01:02:08.857050 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.64s
2026-04-17 01:02:08.857059 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.39s
2026-04-17 01:02:08.857065 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.02s
2026-04-17 01:02:08.857071 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.81s
2026-04-17 01:02:08.857080 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.78s
2026-04-17 01:02:08.857090 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.73s
2026-04-17 01:02:08.857100 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.72s
2026-04-17 01:02:08.857111 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.71s
2026-04-17 01:02:08.857122 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.67s
2026-04-17 01:02:08.857132 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.48s
2026-04-17 01:02:08.857143 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.30s
2026-04-17 01:02:08.857152 | orchestrator | 2026-04-17 01:02:08 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:02:08.857162 | orchestrator | 2026-04-17 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:02:11.900497 | orchestrator | 2026-04-17 01:02:11 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:02:11.902653 | orchestrator | 2026-04-17 01:02:11 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state STARTED
2026-04-17 01:02:11.904287 | orchestrator | 2026-04-17 01:02:11 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:02:11.906512 | orchestrator | 2026-04-17 01:02:11 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state STARTED
2026-04-17 01:02:11.906557 | orchestrator | 2026-04-17 01:02:11 | INFO  | Wait 1 second(s)
until the next check
2026-04-17 01:03:15.859268 | orchestrator | 2026-04-17 01:03:15 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED
2026-04-17 01:03:15.859499 | orchestrator | 2026-04-17 01:03:15 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED
2026-04-17 01:03:15.860709 | orchestrator | 2026-04-17 01:03:15 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state STARTED
2026-04-17 01:03:15.861230 | orchestrator | 2026-04-17 01:03:15 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:03:15.863612 | orchestrator | 2026-04-17 01:03:15 | INFO  | Task 63937754-ee52-4e17-a980-03f26b42a899 is in state SUCCESS
2026-04-17 01:03:15.864762 | orchestrator |
2026-04-17 01:03:15.864807 | orchestrator
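The INFO lines above come from a simple wait loop: the deployment driver checks each submitted task's state once per second and moves on only when every task leaves STARTED. A minimal sketch of such a loop, assuming a caller-supplied `get_task_state` lookup (hypothetical name; the real OSISM client is not shown here):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600):
    """Poll until every task reaches a terminal state.

    get_task_state(task_id) -> str is a caller-supplied lookup; in the job
    above it would query the task backend that emits the INFO lines.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"INFO  | Task {task_id} is in state {state}")
        # Drop tasks that finished; wait before re-checking the rest.
        pending = {t for t in pending if states[t] not in terminal}
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

The fixed one-second interval matches the log output; a production loop might add jitter or exponential backoff to reduce load on the task backend.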
| 2026-04-17 01:03:15.864817 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:03:15.864825 | orchestrator | 2026-04-17 01:03:15.864831 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:03:15.864838 | orchestrator | Friday 17 April 2026 00:59:30 +0000 (0:00:00.378) 0:00:00.378 ********** 2026-04-17 01:03:15.864844 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:03:15.864852 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:03:15.864858 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:03:15.864865 | orchestrator | 2026-04-17 01:03:15.864872 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:03:15.865008 | orchestrator | Friday 17 April 2026 00:59:30 +0000 (0:00:00.348) 0:00:00.727 ********** 2026-04-17 01:03:15.865017 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-17 01:03:15.865024 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-17 01:03:15.865031 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-17 01:03:15.865038 | orchestrator | 2026-04-17 01:03:15.865045 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-17 01:03:15.865052 | orchestrator | 2026-04-17 01:03:15.865059 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 01:03:15.865066 | orchestrator | Friday 17 April 2026 00:59:31 +0000 (0:00:00.324) 0:00:01.051 ********** 2026-04-17 01:03:15.865073 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:03:15.865082 | orchestrator | 2026-04-17 01:03:15.865106 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-17 01:03:15.865113 | orchestrator | Friday 17 
April 2026 00:59:31 +0000 (0:00:00.754) 0:00:01.805 ********** 2026-04-17 01:03:15.865121 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-17 01:03:15.865128 | orchestrator | 2026-04-17 01:03:15.865134 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-17 01:03:15.865141 | orchestrator | Friday 17 April 2026 00:59:36 +0000 (0:00:04.288) 0:00:06.094 ********** 2026-04-17 01:03:15.865148 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-17 01:03:15.865155 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-17 01:03:15.865162 | orchestrator | 2026-04-17 01:03:15.865168 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-17 01:03:15.865176 | orchestrator | Friday 17 April 2026 00:59:43 +0000 (0:00:07.406) 0:00:13.500 ********** 2026-04-17 01:03:15.865183 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:03:15.865190 | orchestrator | 2026-04-17 01:03:15.865197 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-17 01:03:15.865203 | orchestrator | Friday 17 April 2026 00:59:47 +0000 (0:00:03.880) 0:00:17.381 ********** 2026-04-17 01:03:15.865210 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-17 01:03:15.865217 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:03:15.865224 | orchestrator | 2026-04-17 01:03:15.865230 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-17 01:03:15.865237 | orchestrator | Friday 17 April 2026 00:59:51 +0000 (0:00:04.362) 0:00:21.743 ********** 2026-04-17 01:03:15.865244 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 
01:03:15.865251 | orchestrator | 2026-04-17 01:03:15.865258 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-17 01:03:15.865644 | orchestrator | Friday 17 April 2026 00:59:55 +0000 (0:00:03.265) 0:00:25.008 ********** 2026-04-17 01:03:15.865653 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-17 01:03:15.865660 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-17 01:03:15.865667 | orchestrator | 2026-04-17 01:03:15.865674 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-17 01:03:15.865681 | orchestrator | Friday 17 April 2026 01:00:03 +0000 (0:00:08.277) 0:00:33.286 ********** 2026-04-17 01:03:15.865918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.866003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.866065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.866073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.866376 | orchestrator | 2026-04-17 01:03:15.866384 | orchestrator 
| TASK [cinder : include_tasks] ************************************************** 2026-04-17 01:03:15.866392 | orchestrator | Friday 17 April 2026 01:00:06 +0000 (0:00:02.705) 0:00:35.991 ********** 2026-04-17 01:03:15.866399 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.866405 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.866411 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.866416 | orchestrator | 2026-04-17 01:03:15.866422 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 01:03:15.866441 | orchestrator | Friday 17 April 2026 01:00:06 +0000 (0:00:00.330) 0:00:36.322 ********** 2026-04-17 01:03:15.866448 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:03:15.866454 | orchestrator | 2026-04-17 01:03:15.866459 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-17 01:03:15.866490 | orchestrator | Friday 17 April 2026 01:00:06 +0000 (0:00:00.495) 0:00:36.817 ********** 2026-04-17 01:03:15.866498 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-17 01:03:15.866504 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-17 01:03:15.866510 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-17 01:03:15.866516 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-17 01:03:15.866523 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-17 01:03:15.866529 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-17 01:03:15.866536 | orchestrator | 2026-04-17 01:03:15.866543 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-17 01:03:15.866550 | orchestrator | Friday 17 April 2026 01:00:09 +0000 (0:00:02.347) 0:00:39.165 ********** 2026-04-17 
01:03:15.866558 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 01:03:15.866566 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 01:03:15.866574 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 01:03:15.868233 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 01:03:15.868350 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 01:03:15.868361 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-17 01:03:15.868369 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 01:03:15.868377 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 01:03:15.868411 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 01:03:15.868458 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 01:03:15.868467 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 01:03:15.868473 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-17 01:03:15.868480 | orchestrator | 2026-04-17 01:03:15.868486 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-17 
01:03:15.868493 | orchestrator | Friday 17 April 2026 01:00:13 +0000 (0:00:03.953) 0:00:43.119 ********** 2026-04-17 01:03:15.868513 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:03:15.868521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:03:15.868526 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-17 01:03:15.868532 | orchestrator | 2026-04-17 01:03:15.868538 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-17 01:03:15.868544 | orchestrator | Friday 17 April 2026 01:00:14 +0000 (0:00:01.773) 0:00:44.892 ********** 2026-04-17 01:03:15.868551 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-17 01:03:15.868556 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-17 01:03:15.868562 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-17 01:03:15.868569 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 01:03:15.868575 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 01:03:15.868580 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-17 01:03:15.868586 | orchestrator | 2026-04-17 01:03:15.868592 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-17 01:03:15.868598 | orchestrator | Friday 17 April 2026 01:00:18 +0000 (0:00:03.259) 0:00:48.152 ********** 2026-04-17 01:03:15.868604 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-17 01:03:15.868610 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-17 01:03:15.868616 | orchestrator | ok: [testbed-node-2] => 
(item=cinder-volume) 2026-04-17 01:03:15.868622 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-17 01:03:15.868628 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-17 01:03:15.868634 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-17 01:03:15.868640 | orchestrator | 2026-04-17 01:03:15.868646 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-17 01:03:15.868652 | orchestrator | Friday 17 April 2026 01:00:19 +0000 (0:00:01.159) 0:00:49.311 ********** 2026-04-17 01:03:15.868658 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.868664 | orchestrator | 2026-04-17 01:03:15.868670 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-17 01:03:15.868676 | orchestrator | Friday 17 April 2026 01:00:19 +0000 (0:00:00.116) 0:00:49.427 ********** 2026-04-17 01:03:15.868682 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.868693 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.868699 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.868705 | orchestrator | 2026-04-17 01:03:15.868710 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 01:03:15.868716 | orchestrator | Friday 17 April 2026 01:00:19 +0000 (0:00:00.384) 0:00:49.812 ********** 2026-04-17 01:03:15.868723 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:03:15.868757 | orchestrator | 2026-04-17 01:03:15.868764 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-17 01:03:15.868770 | orchestrator | Friday 17 April 2026 01:00:21 +0000 (0:00:01.226) 0:00:51.038 ********** 2026-04-17 01:03:15.868777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.868797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.868812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.868821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868862 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.868948 | orchestrator | 2026-04-17 01:03:15.868955 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-17 01:03:15.868961 | orchestrator | Friday 17 April 2026 01:00:25 +0000 (0:00:04.681) 0:00:55.720 ********** 2026-04-17 01:03:15.868967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.868974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.868980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.868986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869004 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.869016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 
01:03:15.869042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869054 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.869061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869180 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.869187 | orchestrator | 2026-04-17 01:03:15.869193 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-17 01:03:15.869199 | orchestrator | Friday 17 April 2026 01:00:26 +0000 (0:00:00.932) 0:00:56.652 ********** 2026-04-17 01:03:15.869206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2026-04-17 01:03:15.869260 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.869267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869292 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.869315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869347 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.869353 | orchestrator | 2026-04-17 01:03:15.869360 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-17 01:03:15.869367 | orchestrator | Friday 17 April 2026 01:00:28 +0000 (0:00:01.442) 0:00:58.094 ********** 2026-04-17 01:03:15.869374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.869411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.869420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.869427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869519 | orchestrator | 2026-04-17 01:03:15.869526 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-17 
01:03:15.869540 | orchestrator | Friday 17 April 2026 01:00:33 +0000 (0:00:05.188) 0:01:03.283 ********** 2026-04-17 01:03:15.869547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-17 01:03:15.869554 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-17 01:03:15.869561 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-17 01:03:15.869569 | orchestrator | 2026-04-17 01:03:15.869576 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-17 01:03:15.869582 | orchestrator | Friday 17 April 2026 01:00:35 +0000 (0:00:02.270) 0:01:05.553 ********** 2026-04-17 01:03:15.869609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.869617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.869624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.869630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.869743 | orchestrator | 2026-04-17 
01:03:15.869748 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-17 01:03:15.869755 | orchestrator | Friday 17 April 2026 01:00:48 +0000 (0:00:13.229) 0:01:18.783 ********** 2026-04-17 01:03:15.869761 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.869768 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:03:15.869775 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:03:15.869781 | orchestrator | 2026-04-17 01:03:15.869788 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-17 01:03:15.869795 | orchestrator | Friday 17 April 2026 01:00:51 +0000 (0:00:02.191) 0:01:20.975 ********** 2026-04-17 01:03:15.869802 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.869807 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:03:15.869813 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:03:15.869822 | orchestrator | 2026-04-17 01:03:15.869828 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-17 01:03:15.869834 | orchestrator | Friday 17 April 2026 01:00:52 +0000 (0:00:01.913) 0:01:22.888 ********** 2026-04-17 01:03:15.869843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869888 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.869912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.869962 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.869969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-17 01:03:15.869989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.870001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.870007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-17 01:03:15.870051 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.870058 | orchestrator | 2026-04-17 01:03:15.870064 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-17 01:03:15.870071 | orchestrator | Friday 17 April 2026 01:00:55 +0000 (0:00:02.311) 0:01:25.200 ********** 2026-04-17 01:03:15.870111 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.870117 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.870123 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.870129 | orchestrator | 2026-04-17 01:03:15.870135 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-17 01:03:15.870141 | orchestrator | Friday 17 April 2026 01:00:56 +0000 (0:00:00.900) 0:01:26.101 ********** 2026-04-17 01:03:15.870148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.870155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.870181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-17 01:03:15.870189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-17 01:03:15.870268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-17 01:03:15.870358 | orchestrator | 2026-04-17 01:03:15.870364 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-17 01:03:15.870371 | orchestrator | Friday 17 April 2026 01:01:00 +0000 (0:00:04.197) 0:01:30.299 ********** 2026-04-17 01:03:15.870377 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.870384 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:03:15.870390 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:03:15.870397 | orchestrator | 2026-04-17 01:03:15.870403 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-17 01:03:15.870409 | orchestrator | Friday 17 April 2026 01:01:00 +0000 (0:00:00.235) 0:01:30.534 ********** 2026-04-17 01:03:15.870415 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870422 | orchestrator | 2026-04-17 01:03:15.870428 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-17 01:03:15.870435 | orchestrator | Friday 17 April 2026 01:01:03 +0000 (0:00:02.638) 0:01:33.172 ********** 2026-04-17 01:03:15.870443 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870449 | orchestrator | 2026-04-17 01:03:15.870456 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-17 01:03:15.870463 | orchestrator | Friday 17 April 2026 01:01:05 +0000 
(0:00:02.243) 0:01:35.416 ********** 2026-04-17 01:03:15.870470 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870477 | orchestrator | 2026-04-17 01:03:15.870484 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 01:03:15.870490 | orchestrator | Friday 17 April 2026 01:01:25 +0000 (0:00:20.206) 0:01:55.623 ********** 2026-04-17 01:03:15.870497 | orchestrator | 2026-04-17 01:03:15.870504 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 01:03:15.870511 | orchestrator | Friday 17 April 2026 01:01:25 +0000 (0:00:00.066) 0:01:55.689 ********** 2026-04-17 01:03:15.870518 | orchestrator | 2026-04-17 01:03:15.870525 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-17 01:03:15.870532 | orchestrator | Friday 17 April 2026 01:01:25 +0000 (0:00:00.061) 0:01:55.751 ********** 2026-04-17 01:03:15.870539 | orchestrator | 2026-04-17 01:03:15.870546 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-17 01:03:15.870570 | orchestrator | Friday 17 April 2026 01:01:25 +0000 (0:00:00.062) 0:01:55.813 ********** 2026-04-17 01:03:15.870577 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870583 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:03:15.870589 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:03:15.870598 | orchestrator | 2026-04-17 01:03:15.870604 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-17 01:03:15.870633 | orchestrator | Friday 17 April 2026 01:02:17 +0000 (0:00:51.528) 0:02:47.341 ********** 2026-04-17 01:03:15.870639 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870646 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:03:15.870653 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:03:15.870660 | orchestrator 
| 2026-04-17 01:03:15.870667 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-17 01:03:15.870674 | orchestrator | Friday 17 April 2026 01:02:44 +0000 (0:00:27.171) 0:03:14.513 ********** 2026-04-17 01:03:15.870681 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870687 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:03:15.870694 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:03:15.870701 | orchestrator | 2026-04-17 01:03:15.870708 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-17 01:03:15.870715 | orchestrator | Friday 17 April 2026 01:03:05 +0000 (0:00:21.039) 0:03:35.552 ********** 2026-04-17 01:03:15.870722 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:03:15.870729 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:03:15.870736 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:03:15.870741 | orchestrator | 2026-04-17 01:03:15.870748 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-17 01:03:15.870755 | orchestrator | Friday 17 April 2026 01:03:13 +0000 (0:00:08.071) 0:03:43.624 ********** 2026-04-17 01:03:15.870761 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:03:15.870767 | orchestrator | 2026-04-17 01:03:15.870773 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:03:15.870781 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 01:03:15.870790 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 01:03:15.870797 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 01:03:15.870803 | orchestrator | 2026-04-17 01:03:15.870810 | orchestrator | 2026-04-17 01:03:15.870817 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:03:15.870823 | orchestrator | Friday 17 April 2026 01:03:13 +0000 (0:00:00.266) 0:03:43.890 ********** 2026-04-17 01:03:15.870830 | orchestrator | =============================================================================== 2026-04-17 01:03:15.870836 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 51.53s 2026-04-17 01:03:15.870843 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 27.17s 2026-04-17 01:03:15.870850 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 21.04s 2026-04-17 01:03:15.870857 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.21s 2026-04-17 01:03:15.870864 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.23s 2026-04-17 01:03:15.870870 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.28s 2026-04-17 01:03:15.870877 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.07s 2026-04-17 01:03:15.870884 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.41s 2026-04-17 01:03:15.870891 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.19s 2026-04-17 01:03:15.870898 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.68s 2026-04-17 01:03:15.870905 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.36s 2026-04-17 01:03:15.870912 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.29s 2026-04-17 01:03:15.870918 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.20s 2026-04-17 01:03:15.870925 | orchestrator | cinder 
: Copying over multiple ceph.conf for cinder services ------------ 3.95s 2026-04-17 01:03:15.870953 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.88s 2026-04-17 01:03:15.870961 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.27s 2026-04-17 01:03:15.870968 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.26s 2026-04-17 01:03:15.870975 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.71s 2026-04-17 01:03:15.870981 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.64s 2026-04-17 01:03:15.870988 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.35s 2026-04-17 01:03:15.870995 | orchestrator | 2026-04-17 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:03:18.894172 | orchestrator | 2026-04-17 01:03:18 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:03:18.894388 | orchestrator | 2026-04-17 01:03:18 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:03:18.894963 | orchestrator | 2026-04-17 01:03:18 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state STARTED 2026-04-17 01:03:18.895755 | orchestrator | 2026-04-17 01:03:18 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:03:18.895792 | orchestrator | 2026-04-17 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:03:21.921468 | orchestrator | 2026-04-17 01:03:21 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:03:21.921883 | orchestrator | 2026-04-17 01:03:21 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:03:21.922495 | orchestrator | 2026-04-17 01:03:21 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state STARTED 
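The TASKS RECAP above lists per-task wall-clock durations in the form `role : Task name ---- 51.53s`. As an aside, a minimal sketch for pulling those durations out of such a recap section (the line format and sample lines are assumptions taken from this console output, not a documented Ansible API):

```python
import re

# Two sample lines copied from the TASKS RECAP in this log.
recap = """
cinder : Restart cinder-api container ---------------------------------- 51.53s
cinder : Running Cinder bootstrap container ---------------------------- 20.21s
"""

# Lazy task-name match, then the dash padding, then the seconds value.
pattern = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

durations = {}
for line in recap.strip().splitlines():
    m = pattern.match(line.strip())
    if m:
        durations[m.group("task")] = float(m.group("secs"))

# Sort slowest-first, the same ordering Ansible prints in its recap.
slowest = sorted(durations.items(), key=lambda kv: -kv[1])
print(slowest[0])  # slowest task and its duration in seconds
```

Dashes inside a task name (e.g. `cinder-api`) do not confuse the pattern, because the padding is matched only as a space-delimited run of dashes before the trailing seconds value.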
2026-04-17 01:03:58.401329 | orchestrator | 2026-04-17 01:03:58 | INFO  | Task
c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:03:58.401394 | orchestrator | 2026-04-17 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:04:01.426611 | orchestrator | 2026-04-17 01:04:01 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:04:01.426687 | orchestrator | 2026-04-17 01:04:01 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:04:01.427461 | orchestrator | 2026-04-17 01:04:01 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state STARTED 2026-04-17 01:04:01.428202 | orchestrator | 2026-04-17 01:04:01 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:04:01.428249 | orchestrator | 2026-04-17 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:04:04.454639 | orchestrator | 2026-04-17 01:04:04 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:04:04.454916 | orchestrator | 2026-04-17 01:04:04 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:04:04.455638 | orchestrator | 2026-04-17 01:04:04 | INFO  | Task df1f536b-1c8e-4da7-a919-4116d25b4297 is in state STARTED 2026-04-17 01:04:04.456921 | orchestrator | 2026-04-17 01:04:04 | INFO  | Task d95271e8-6bcb-4b98-aa67-5654a1c45ce8 is in state SUCCESS 2026-04-17 01:04:04.458163 | orchestrator | 2026-04-17 01:04:04.458236 | orchestrator | 2026-04-17 01:04:04.458247 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:04:04.458255 | orchestrator | 2026-04-17 01:04:04.458262 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:04:04.458269 | orchestrator | Friday 17 April 2026 01:02:10 +0000 (0:00:00.316) 0:00:00.316 ********** 2026-04-17 01:04:04.458277 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:04:04.458285 | orchestrator | ok: [testbed-node-1] 
2026-04-17 01:04:04.458291 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:04:04.458298 | orchestrator | 2026-04-17 01:04:04.458306 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:04:04.458311 | orchestrator | Friday 17 April 2026 01:02:10 +0000 (0:00:00.271) 0:00:00.588 ********** 2026-04-17 01:04:04.458315 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-17 01:04:04.458319 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-17 01:04:04.458323 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-17 01:04:04.458327 | orchestrator | 2026-04-17 01:04:04.458331 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-17 01:04:04.458335 | orchestrator | 2026-04-17 01:04:04.458339 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-17 01:04:04.458343 | orchestrator | Friday 17 April 2026 01:02:10 +0000 (0:00:00.281) 0:00:00.870 ********** 2026-04-17 01:04:04.458347 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:04:04.458352 | orchestrator | 2026-04-17 01:04:04.458356 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-17 01:04:04.458360 | orchestrator | Friday 17 April 2026 01:02:11 +0000 (0:00:00.593) 0:00:01.463 ********** 2026-04-17 01:04:04.458364 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-17 01:04:04.458368 | orchestrator | 2026-04-17 01:04:04.458372 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-17 01:04:04.458380 | orchestrator | Friday 17 April 2026 01:02:15 +0000 (0:00:04.037) 0:00:05.501 ********** 2026-04-17 01:04:04.458390 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-17 01:04:04.458397 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-17 01:04:04.458404 | orchestrator | 2026-04-17 01:04:04.458411 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-17 01:04:04.458417 | orchestrator | Friday 17 April 2026 01:02:22 +0000 (0:00:07.306) 0:00:12.807 ********** 2026-04-17 01:04:04.458425 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:04:04.458429 | orchestrator | 2026-04-17 01:04:04.458433 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-17 01:04:04.458437 | orchestrator | Friday 17 April 2026 01:02:25 +0000 (0:00:03.058) 0:00:15.865 ********** 2026-04-17 01:04:04.458441 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-17 01:04:04.458445 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:04:04.458448 | orchestrator | 2026-04-17 01:04:04.458474 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-17 01:04:04.458479 | orchestrator | Friday 17 April 2026 01:02:30 +0000 (0:00:04.212) 0:00:20.078 ********** 2026-04-17 01:04:04.458483 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:04:04.458487 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-17 01:04:04.458491 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-17 01:04:04.458495 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-17 01:04:04.458499 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-17 01:04:04.458503 | orchestrator | 2026-04-17 01:04:04.458507 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-17 01:04:04.458517 | 
orchestrator | Friday 17 April 2026 01:02:47 +0000 (0:00:16.911) 0:00:36.989 ********** 2026-04-17 01:04:04.458521 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-17 01:04:04.458525 | orchestrator | 2026-04-17 01:04:04.458529 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-17 01:04:04.458533 | orchestrator | Friday 17 April 2026 01:02:51 +0000 (0:00:04.139) 0:00:41.129 ********** 2026-04-17 01:04:04.458586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.458704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.458709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.458718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458751 | orchestrator | 2026-04-17 01:04:04.458755 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-17 01:04:04.458759 | orchestrator | Friday 17 April 2026 01:02:53 +0000 (0:00:02.263) 0:00:43.392 ********** 2026-04-17 01:04:04.458763 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-17 01:04:04.458767 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-17 01:04:04.458771 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-17 01:04:04.458775 | orchestrator | 2026-04-17 01:04:04.458779 | orchestrator | TASK [barbican : Check if policies 
shall be overwritten] *********************** 2026-04-17 01:04:04.458783 | orchestrator | Friday 17 April 2026 01:02:55 +0000 (0:00:01.634) 0:00:45.026 ********** 2026-04-17 01:04:04.458787 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:04:04.458791 | orchestrator | 2026-04-17 01:04:04.458795 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-17 01:04:04.458802 | orchestrator | Friday 17 April 2026 01:02:55 +0000 (0:00:00.141) 0:00:45.168 ********** 2026-04-17 01:04:04.458806 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:04:04.458810 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:04:04.458814 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:04:04.458818 | orchestrator | 2026-04-17 01:04:04.458822 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-17 01:04:04.458826 | orchestrator | Friday 17 April 2026 01:02:55 +0000 (0:00:00.274) 0:00:45.443 ********** 2026-04-17 01:04:04.458830 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:04:04.458834 | orchestrator | 2026-04-17 01:04:04.458838 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-17 01:04:04.458842 | orchestrator | Friday 17 April 2026 01:02:56 +0000 (0:00:00.947) 0:00:46.391 ********** 2026-04-17 01:04:04.458846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.458857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.458861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.458866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.458899 | orchestrator | 2026-04-17 01:04:04.458903 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-17 01:04:04.458907 | orchestrator | Friday 17 April 2026 01:03:00 +0000 (0:00:03.840) 0:00:50.231 ********** 2026-04-17 01:04:04.458911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.458919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.458924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.458928 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:04:04.458940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.458945 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.458949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.458956 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:04:04.458960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.458964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.458970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.458974 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:04:04.458978 | orchestrator | 2026-04-17 01:04:04.458982 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-17 01:04:04.458986 | orchestrator | Friday 17 April 2026 01:03:01 +0000 (0:00:00.770) 0:00:51.002 ********** 2026-04-17 01:04:04.458993 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.458997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.459004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459017 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:04:04.459022 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459027 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:04:04.459034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.459041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-17 01:04:04.459045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:04:04.459049 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:04:04.459053 | orchestrator |
2026-04-17 01:04:04.459057 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-04-17 01:04:04.459061 | orchestrator | Friday 17 April 2026 01:03:02 +0000 (0:00:01.348) 0:00:52.351 **********
2026-04-17 01:04:04.459067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy':
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:04:04.459168 | orchestrator |
2026-04-17 01:04:04.459173 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-17 01:04:04.459180 | orchestrator | Friday 17 April 2026 01:03:05 +0000 (0:00:03.330) 0:00:55.681 **********
2026-04-17 01:04:04.459187 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:04:04.459193 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:04:04.459200 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:04:04.459207 | orchestrator |
2026-04-17 01:04:04.459213 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-17 01:04:04.459220 | orchestrator | Friday 17 April 2026 01:03:08 +0000 (0:00:02.311) 0:00:57.993 **********
2026-04-17 01:04:04.459226 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 01:04:04.459229 | orchestrator |
2026-04-17 01:04:04.459233 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-17 01:04:04.459237 | orchestrator | Friday 17 April 2026 01:03:09 +0000 (0:00:01.463) 0:00:59.456 **********
2026-04-17 01:04:04.459241 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:04:04.459245 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:04:04.459249 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:04:04.459253 | orchestrator |
2026-04-17 01:04:04.459257 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-17 01:04:04.459261 | orchestrator | Friday 17 April 2026 01:03:10 +0000 (0:00:00.506) 0:00:59.963 **********
2026-04-17 01:04:04.459265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 
01:04:04.459310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:04:04.459321 | orchestrator |
2026-04-17 01:04:04.459328 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-04-17 01:04:04.459335 | orchestrator | Friday 17 April 2026 01:03:18 +0000 (0:00:08.106) 0:01:08.070 **********
2026-04-17 01:04:04.459346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-17 01:04:04.459353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value':
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459368 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:04:04.459375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.459385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459408 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:04:04.459416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-17 01:04:04.459423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:04:04.459438 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:04:04.459445 | orchestrator | 
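The `(item={...})` payloads echoed throughout these task results are plain Python literals, so they can be loaded for post-mortem log analysis with `ast.literal_eval`. A minimal sketch under that assumption; the sample payload below is abridged from the entries above, not a helper this job provides:

```python
import ast

# Abridged copy of one `(item={...})` payload from the log above; the real
# entries carry more keys (volumes, dimensions, haproxy, ...).
raw = ("{'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', "
       "'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', "
       "'healthcheck': {'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672']}}}")

# literal_eval only accepts Python literals, so it is safe on untrusted log text.
item = ast.literal_eval(raw)
print(item["key"], "->", item["value"]["image"])
```

This makes it easy to, for example, collect every image tag or healthcheck command mentioned in a deploy log without a fragile regex.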
2026-04-17 01:04:04.459452 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-04-17 01:04:04.459460 | orchestrator | Friday 17 April 2026 01:03:18 +0000 (0:00:00.468) 0:01:08.539 **********
2026-04-17 01:04:04.459470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-17 01:04:04.459483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'},
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-17 01:04:04.459493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:04:04.459521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:04:04.459526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:04:04.459531 | orchestrator |
2026-04-17 01:04:04.459535 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-17 01:04:04.459540 | orchestrator | Friday 17 April 2026 01:03:21 +0000 (0:00:02.853) 0:01:11.393 **********
2026-04-17 01:04:04.459545 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:04:04.459549 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:04:04.459554 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:04:04.459558 | orchestrator |
2026-04-17 01:04:04.459563 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-17 01:04:04.459568 | orchestrator | Friday 17 April 2026 01:03:21 +0000 (0:00:00.395) 0:01:11.789 **********
2026-04-17 01:04:04.459573 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:04:04.459577 | orchestrator |
2026-04-17 01:04:04.459582 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-17 01:04:04.459587 | orchestrator | Friday 17 April 2026 01:03:24 +0000 (0:00:02.424) 0:01:14.214 **********
2026-04-17 01:04:04.459591 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:04:04.459596 | orchestrator |
2026-04-17 01:04:04.459600 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-17 01:04:04.459605 | orchestrator | Friday 17 April 2026 01:03:26 +0000 (0:00:02.614) 0:01:16.828 **********
2026-04-17 01:04:04.459610 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:04:04.459614 | orchestrator |
2026-04-17 01:04:04.459619 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-17 01:04:04.459623 | orchestrator | Friday 17 April 2026 01:03:39 +0000 (0:00:12.321) 0:01:29.150 **********
2026-04-17 01:04:04.459628 | orchestrator |
2026-04-17 01:04:04.459636 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-17 01:04:04.459640 | orchestrator | Friday 17 April 2026 01:03:39 +0000 (0:00:00.399) 0:01:29.550 **********
2026-04-17 01:04:04.459645 | orchestrator |
2026-04-17 01:04:04.459650 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-17 01:04:04.459654 | orchestrator | Friday 17 April 2026 01:03:39 +0000 (0:00:00.177) 0:01:29.727 **********
2026-04-17 01:04:04.459659 | orchestrator |
2026-04-17 01:04:04.459664 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-17 01:04:04.459668 | orchestrator | Friday 17 April 2026 01:03:40 +0000 (0:00:00.179) 0:01:29.907 **********
2026-04-17 01:04:04.459673 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:04:04.459678 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:04:04.459682 | orchestrator | changed: [testbed-node-2]
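Every container definition in this log carries a `healthcheck` mapping (interval, retries, start_period, and timeout in seconds, plus a `CMD-SHELL` test command). To illustrate what those fields mean at the container-runtime level, here is a sketch that renders such a mapping into `docker run` style health flags; the rendering is an illustrative assumption, not kolla-ansible's actual implementation:

```python
import shlex

def healthcheck_flags(hc):
    """Render a kolla-style healthcheck mapping (seconds as strings, as in
    the log above) into docker run --health-* flags."""
    test = hc["test"]
    # The log entries use ['CMD-SHELL', '<command>']; docker's flag takes
    # just the shell command.
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return " ".join([
        f"--health-cmd {shlex.quote(cmd)}",
        f"--health-interval {hc['interval']}s",
        f"--health-retries {hc['retries']}",
        f"--health-start-period {hc['start_period']}s",
        f"--health-timeout {hc['timeout']}s",
    ])

# Healthcheck mapping copied from the barbican-worker definition above.
hc = {'interval': '30', 'retries': '3', 'start_period': '5',
      'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'],
      'timeout': '30'}
print(healthcheck_flags(hc))
```

`healthcheck_port barbican-worker 5672` and `healthcheck_curl` are helper scripts shipped inside the kolla images; they check that the named process is listening on (or answering at) the given port/URL.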
2026-04-17 01:04:04.459687 | orchestrator | 2026-04-17 01:04:04.459692 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-17 01:04:04.459696 | orchestrator | Friday 17 April 2026 01:03:46 +0000 (0:00:06.959) 0:01:36.866 ********** 2026-04-17 01:04:04.459700 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:04:04.459704 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:04:04.459708 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:04:04.459712 | orchestrator | 2026-04-17 01:04:04.459716 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-17 01:04:04.459720 | orchestrator | Friday 17 April 2026 01:03:53 +0000 (0:00:06.719) 0:01:43.586 ********** 2026-04-17 01:04:04.459724 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:04:04.459728 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:04:04.459732 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:04:04.459736 | orchestrator | 2026-04-17 01:04:04.459740 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:04:04.459744 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 01:04:04.459751 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 01:04:04.459756 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 01:04:04.459760 | orchestrator | 2026-04-17 01:04:04.459763 | orchestrator | 2026-04-17 01:04:04.459767 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:04:04.459771 | orchestrator | Friday 17 April 2026 01:04:01 +0000 (0:00:07.456) 0:01:51.042 ********** 2026-04-17 01:04:04.459775 | orchestrator | 
=============================================================================== 2026-04-17 01:04:04.459779 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.91s 2026-04-17 01:04:04.459785 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.32s 2026-04-17 01:04:04.459789 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.11s 2026-04-17 01:04:04.459793 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.46s 2026-04-17 01:04:04.459797 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.31s 2026-04-17 01:04:04.459801 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.96s 2026-04-17 01:04:04.459805 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.72s 2026-04-17 01:04:04.459809 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.21s 2026-04-17 01:04:04.459813 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.14s 2026-04-17 01:04:04.459817 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.04s 2026-04-17 01:04:04.459821 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.84s 2026-04-17 01:04:04.459828 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.33s 2026-04-17 01:04:04.459832 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.06s 2026-04-17 01:04:04.459836 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.85s 2026-04-17 01:04:04.459839 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.61s 2026-04-17 01:04:04.459843 | orchestrator | barbican : 
Creating barbican database ----------------------------------- 2.43s 2026-04-17 01:04:04.459847 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.31s 2026-04-17 01:04:04.459851 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.26s 2026-04-17 01:04:04.459855 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.63s 2026-04-17 01:04:04.459859 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.46s 2026-04-17 01:04:04.459863 | orchestrator | 2026-04-17 01:04:04 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:04:04.459867 | orchestrator | 2026-04-17 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:04:07.503722 | orchestrator | 2026-04-17 01:04:07 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:04:07.505895 | orchestrator | 2026-04-17 01:04:07 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:04:07.507363 | orchestrator | 2026-04-17 01:04:07 | INFO  | Task df1f536b-1c8e-4da7-a919-4116d25b4297 is in state STARTED 2026-04-17 01:04:07.508912 | orchestrator | 2026-04-17 01:04:07 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:04:07.508967 | orchestrator | 2026-04-17 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:04:10.545363 | orchestrator | 2026-04-17 01:04:10 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:04:10.546294 | orchestrator | 2026-04-17 01:04:10 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:04:10.547950 | orchestrator | 2026-04-17 01:04:10 | INFO  | Task df1f536b-1c8e-4da7-a919-4116d25b4297 is in state STARTED 2026-04-17 01:04:10.550970 | orchestrator | 2026-04-17 01:04:10 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in 
state STARTED 2026-04-17 01:04:10.551037 | orchestrator | 2026-04-17 01:04:10 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:04:13 to 01:05:17; tasks fbed3227-584f-4e41-97d4-1642db701a62, f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe, df1f536b-1c8e-4da7-a919-4116d25b4297, and c8b7321f-8470-409e-96fc-81daf018d767 all remain in state STARTED ...]
2026-04-17 01:05:17.524311 | orchestrator | 2026-04-17 01:05:17 | INFO  |
Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:05:17.524448 | orchestrator | 2026-04-17 01:05:17 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:05:17.525321 | orchestrator | 2026-04-17 01:05:17 | INFO  | Task df1f536b-1c8e-4da7-a919-4116d25b4297 is in state STARTED 2026-04-17 01:05:17.526000 | orchestrator | 2026-04-17 01:05:17 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:05:17.526100 | orchestrator | 2026-04-17 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:05:20.550656 | orchestrator | 2026-04-17 01:05:20 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:05:20.551995 | orchestrator | 2026-04-17 01:05:20 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:05:20.553819 | orchestrator | 2026-04-17 01:05:20 | INFO  | Task df1f536b-1c8e-4da7-a919-4116d25b4297 is in state STARTED 2026-04-17 01:05:20.555378 | orchestrator | 2026-04-17 01:05:20 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:05:20.555432 | orchestrator | 2026-04-17 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:05:23.592586 | orchestrator | 2026-04-17 01:05:23 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:05:23.594516 | orchestrator | 2026-04-17 01:05:23 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:05:23.595476 | orchestrator | 2026-04-17 01:05:23 | INFO  | Task df1f536b-1c8e-4da7-a919-4116d25b4297 is in state SUCCESS 2026-04-17 01:05:23.596332 | orchestrator | 2026-04-17 01:05:23 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:05:23.597007 | orchestrator | 2026-04-17 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:05:26.623707 | orchestrator | 2026-04-17 01:05:26 | INFO  | Task 
fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:05:26.623777 | orchestrator | 2026-04-17 01:05:26 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:05:26.624448 | orchestrator | 2026-04-17 01:05:26 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:05:26.625465 | orchestrator | 2026-04-17 01:05:26 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:05:26.625523 | orchestrator | 2026-04-17 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:05:29.661973 | orchestrator | 2026-04-17 01:05:29 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:05:29.662635 | orchestrator | 2026-04-17 01:05:29 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:05:29.663541 | orchestrator | 2026-04-17 01:05:29 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:05:29.664269 | orchestrator | 2026-04-17 01:05:29 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:05:29.664315 | orchestrator | 2026-04-17 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:05:32.696789 | orchestrator | 2026-04-17 01:05:32 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:05:32.697002 | orchestrator | 2026-04-17 01:05:32 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:05:32.697637 | orchestrator | 2026-04-17 01:05:32 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:05:32.698430 | orchestrator | 2026-04-17 01:05:32 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:05:32.698484 | orchestrator | 2026-04-17 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:05:35.734271 | orchestrator | 2026-04-17 01:05:35 | INFO  | Task 
fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED
[... identical polling output repeated every ~3 seconds from 01:05:35 to 01:06:03; tasks fbed3227-584f-4e41-97d4-1642db701a62, f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe, c8b7321f-8470-409e-96fc-81daf018d767, and 89c9976b-39e0-4b01-9e22-6f7d1cedd579 all remain in state STARTED ...]
2026-04-17 01:06:03.078299 | orchestrator | 2026-04-17 01:06:03 | INFO  | Task
fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:06:03.078679 | orchestrator | 2026-04-17 01:06:03 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:06:03.079379 | orchestrator | 2026-04-17 01:06:03 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:03.080077 | orchestrator | 2026-04-17 01:06:03 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:03.080159 | orchestrator | 2026-04-17 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:06.109754 | orchestrator | 2026-04-17 01:06:06 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state STARTED 2026-04-17 01:06:06.110793 | orchestrator | 2026-04-17 01:06:06 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state STARTED 2026-04-17 01:06:06.111818 | orchestrator | 2026-04-17 01:06:06 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:06.113038 | orchestrator | 2026-04-17 01:06:06 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:06.113067 | orchestrator | 2026-04-17 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:09.154694 | orchestrator | 2026-04-17 01:06:09 | INFO  | Task fbed3227-584f-4e41-97d4-1642db701a62 is in state SUCCESS 2026-04-17 01:06:09.156369 | orchestrator | 2026-04-17 01:06:09.156434 | orchestrator | 2026-04-17 01:06:09.156441 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-17 01:06:09.156446 | orchestrator | 2026-04-17 01:06:09.156450 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-17 01:06:09.156454 | orchestrator | Friday 17 April 2026 01:04:04 +0000 (0:00:00.113) 0:00:00.113 ********** 2026-04-17 01:06:09.156458 | orchestrator | changed: [localhost] 2026-04-17 01:06:09.156463 | orchestrator | 2026-04-17 
01:06:09.156467 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-17 01:06:09.156517 | orchestrator | Friday 17 April 2026 01:04:05 +0000 (0:00:00.756) 0:00:00.870 ********** 2026-04-17 01:06:09.156522 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-04-17 01:06:09.156526 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2026-04-17 01:06:09.156530 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left). 2026-04-17 01:06:09.156535 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs"} 2026-04-17 01:06:09.156539 | orchestrator | 2026-04-17 01:06:09.156543 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:06:09.156548 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-17 01:06:09.156552 | orchestrator | 2026-04-17 01:06:09.156556 | orchestrator | 2026-04-17 01:06:09.156560 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:06:09.156563 | orchestrator | Friday 17 April 2026 01:05:22 +0000 (0:01:17.244) 0:01:18.114 ********** 2026-04-17 01:06:09.156567 | orchestrator | =============================================================================== 2026-04-17 01:06:09.156571 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 77.24s 2026-04-17 01:06:09.156575 | orchestrator | Ensure the destination directory exists --------------------------------- 0.76s 2026-04-17 01:06:09.156579 | orchestrator | 2026-04-17 01:06:09.156582 
| orchestrator |
2026-04-17 01:06:09.156586 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 01:06:09.156603 | orchestrator |
2026-04-17 01:06:09.156607 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 01:06:09.156611 | orchestrator | Friday 17 April 2026 01:03:19 +0000 (0:00:00.457) 0:00:00.457 **********
2026-04-17 01:06:09.156624 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:06:09.156628 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:06:09.156632 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:06:09.156636 | orchestrator |
2026-04-17 01:06:09.156653 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 01:06:09.156657 | orchestrator | Friday 17 April 2026 01:03:19 +0000 (0:00:00.442) 0:00:00.899 **********
2026-04-17 01:06:09.156662 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-17 01:06:09.156665 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-17 01:06:09.156669 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-17 01:06:09.156673 | orchestrator |
2026-04-17 01:06:09.156677 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-17 01:06:09.156680 | orchestrator |
2026-04-17 01:06:09.156684 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-17 01:06:09.156688 | orchestrator | Friday 17 April 2026 01:03:20 +0000 (0:00:00.589) 0:00:01.372 **********
2026-04-17 01:06:09.156692 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 01:06:09.156696 | orchestrator |
2026-04-17 01:06:09.156700 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-17 01:06:09.156706 |
orchestrator | Friday 17 April 2026 01:03:20 +0000 (0:00:00.589) 0:00:01.962 **********
2026-04-17 01:06:09.156712 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-17 01:06:09.156719 | orchestrator |
2026-04-17 01:06:09.156726 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-17 01:06:09.156733 | orchestrator | Friday 17 April 2026 01:03:24 +0000 (0:00:04.145) 0:00:06.107 **********
2026-04-17 01:06:09.156739 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-17 01:06:09.156753 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-17 01:06:09.156759 | orchestrator |
2026-04-17 01:06:09.156765 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-17 01:06:09.156772 | orchestrator | Friday 17 April 2026 01:03:32 +0000 (0:00:07.384) 0:00:13.495 **********
2026-04-17 01:06:09.156779 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-17 01:06:09.156786 | orchestrator |
2026-04-17 01:06:09.156793 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-17 01:06:09.156800 | orchestrator | Friday 17 April 2026 01:03:36 +0000 (0:00:03.770) 0:00:17.265 **********
2026-04-17 01:06:09.156807 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-17 01:06:09.156813 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-17 01:06:09.156817 | orchestrator |
2026-04-17 01:06:09.156821 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-17 01:06:09.156825 | orchestrator | Friday 17 April 2026 01:03:40 +0000 (0:00:04.266) 0:00:21.532 **********
2026-04-17 01:06:09.156828 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-17
01:06:09.156832 | orchestrator | 2026-04-17 01:06:09.156846 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-17 01:06:09.156853 | orchestrator | Friday 17 April 2026 01:03:43 +0000 (0:00:03.539) 0:00:25.071 ********** 2026-04-17 01:06:09.156859 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-17 01:06:09.156865 | orchestrator | 2026-04-17 01:06:09.156870 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-17 01:06:09.156876 | orchestrator | Friday 17 April 2026 01:03:48 +0000 (0:00:04.276) 0:00:29.348 ********** 2026-04-17 01:06:09.156890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.156900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.156907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.156931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.156982 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157077 | orchestrator | 2026-04-17 01:06:09.157083 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-17 01:06:09.157102 | orchestrator | Friday 17 April 2026 01:03:53 +0000 (0:00:05.117) 0:00:34.466 ********** 2026-04-17 01:06:09.157109 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.157116 | orchestrator | 2026-04-17 01:06:09.157126 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-17 01:06:09.157133 | orchestrator | Friday 17 April 2026 01:03:53 +0000 (0:00:00.219) 0:00:34.685 ********** 2026-04-17 01:06:09.157141 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.157147 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.157154 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.157161 | orchestrator | 2026-04-17 01:06:09.157168 | orchestrator | TASK [designate : include_tasks] *********************************************** 
2026-04-17 01:06:09.157175 | orchestrator | Friday 17 April 2026 01:03:54 +0000 (0:00:00.788) 0:00:35.474 ********** 2026-04-17 01:06:09.157183 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:06:09.157190 | orchestrator | 2026-04-17 01:06:09.157213 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-17 01:06:09.157222 | orchestrator | Friday 17 April 2026 01:03:55 +0000 (0:00:01.186) 0:00:36.660 ********** 2026-04-17 01:06:09.157254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.157262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.157268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.157322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157343 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157364 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.157487 | orchestrator | 2026-04-17 01:06:09.157494 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-17 01:06:09.157501 | orchestrator | Friday 17 April 2026 01:04:03 +0000 (0:00:08.045) 0:00:44.706 ********** 2026-04-17 01:06:09.157508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.157516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 01:06:09.157523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-04-17 01:06:09.157542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157882 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.157898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.157903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 01:06:09.157907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157932 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.157940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.157944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 01:06:09.157948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157969 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.157972 | orchestrator | 2026-04-17 01:06:09.157976 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-17 01:06:09.157980 | orchestrator | Friday 17 April 2026 01:04:04 +0000 (0:00:00.730) 0:00:45.436 ********** 2026-04-17 01:06:09.157986 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.157991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 01:06:09.157995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.157999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158043 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.158052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 01:06:09.158060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158078 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-17 01:06:09.158090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158098 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.158102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158122 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.158128 | orchestrator | 2026-04-17 01:06:09.158137 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-17 01:06:09.158157 | orchestrator | Friday 17 April 2026 01:04:05 +0000 (0:00:00.883) 0:00:46.320 ********** 2026-04-17 01:06:09.158168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.158175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.158182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.158208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 
01:06:09.158280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158303 | orchestrator | 2026-04-17 01:06:09.158307 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-17 01:06:09.158312 | orchestrator | Friday 17 April 2026 01:04:11 +0000 (0:00:06.744) 0:00:53.064 ********** 2026-04-17 01:06:09.158320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.158331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.158338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-17 01:06:09.158349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158412 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158442 | orchestrator | 2026-04-17 01:06:09.158446 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-17 01:06:09.158449 | orchestrator | Friday 17 April 2026 01:04:25 +0000 (0:00:14.061) 0:01:07.125 ********** 2026-04-17 
01:06:09.158453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-17 01:06:09.158458 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-17 01:06:09.158461 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-17 01:06:09.158465 | orchestrator | 2026-04-17 01:06:09.158469 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-17 01:06:09.158473 | orchestrator | Friday 17 April 2026 01:04:29 +0000 (0:00:04.005) 0:01:11.130 ********** 2026-04-17 01:06:09.158476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-17 01:06:09.158480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-17 01:06:09.158486 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-17 01:06:09.158490 | orchestrator | 2026-04-17 01:06:09.158493 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-17 01:06:09.158497 | orchestrator | Friday 17 April 2026 01:04:33 +0000 (0:00:03.269) 0:01:14.399 ********** 2026-04-17 01:06:09.158503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158544 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158558 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-17 01:06:09.158593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:06:09.158608 | orchestrator | 2026-04-17 01:06:09.158612 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-17 01:06:09.158616 | orchestrator | Friday 17 April 2026 01:04:36 +0000 (0:00:02.881) 0:01:17.280 ********** 2026-04-17 01:06:09.158624 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-17 01:06:09.158639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158751 | orchestrator |
2026-04-17 01:06:09.158755 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-17 01:06:09.158759 | orchestrator | Friday 17 April 2026 01:04:38 +0000 (0:00:02.572) 0:01:19.853 **********
2026-04-17 01:06:09.158764 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:06:09.158769 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:06:09.158773 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:06:09.158781 | orchestrator |
2026-04-17 01:06:09.158785 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-17 01:06:09.158788 | orchestrator | Friday 17 April 2026 01:04:39 +0000 (0:00:00.586) 0:01:20.440 **********
2026-04-17 01:06:09.158794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158870 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:06:09.158876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158901 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:06:09.158907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158937 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:06:09.158941 | orchestrator |
2026-04-17 01:06:09.158944 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-04-17 01:06:09.158948 | orchestrator | Friday 17 April 2026 01:04:40 +0000 (0:00:00.878) 0:01:21.318 **********
2026-04-17 01:06:09.158955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-17 01:06:09.158969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-17 01:06:09.158987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.158997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:06:09.159048 | orchestrator |
2026-04-17 01:06:09.159052 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-17 01:06:09.159056 | orchestrator | Friday 17 April 2026 01:04:44 +0000 (0:00:04.781) 0:01:26.099 **********
2026-04-17 01:06:09.159060 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:06:09.159064 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:06:09.159067 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:06:09.159071 | orchestrator |
2026-04-17 01:06:09.159077 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-04-17 01:06:09.159085 | orchestrator | Friday 17 April 2026 01:04:45 +0000 (0:00:00.432) 0:01:26.532 **********
2026-04-17 01:06:09.159091 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-17 01:06:09.159098 | orchestrator |
2026-04-17 01:06:09.159104 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-17 01:06:09.159111 | orchestrator | Friday 17 April 2026 01:04:47 +0000 (0:00:02.481) 0:01:29.013 **********
2026-04-17 01:06:09.159117 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-17 01:06:09.159124 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-17 01:06:09.159130 | orchestrator |
2026-04-17 01:06:09.159136 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-17 01:06:09.159143 | orchestrator | Friday 17 April 2026 01:04:50 +0000 (0:00:02.888) 0:01:31.901 **********
2026-04-17 01:06:09.159149 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159156 | orchestrator |
2026-04-17 01:06:09.159163 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-17 01:06:09.159170 | orchestrator | Friday 17 April 2026 01:05:06 +0000 (0:00:15.954) 0:01:47.856 **********
2026-04-17 01:06:09.159176 | orchestrator |
2026-04-17 01:06:09.159180 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-17 01:06:09.159184 | orchestrator | Friday 17 April 2026 01:05:06 +0000 (0:00:00.064) 0:01:47.920 **********
2026-04-17 01:06:09.159188 | orchestrator |
2026-04-17 01:06:09.159192 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-17 01:06:09.159237 | orchestrator | Friday 17 April 2026 01:05:06 +0000 (0:00:00.062) 0:01:47.983 **********
2026-04-17 01:06:09.159245 | orchestrator |
2026-04-17 01:06:09.159252 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-17 01:06:09.159258 | orchestrator | Friday 17 April 2026 01:05:06 +0000 (0:00:00.064) 0:01:48.047 **********
2026-04-17 01:06:09.159265 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159271 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:06:09.159278 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:06:09.159282 | orchestrator |
2026-04-17 01:06:09.159286 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-17 01:06:09.159290 | orchestrator | Friday 17 April 2026 01:05:15 +0000 (0:00:08.311) 0:01:56.359 **********
2026-04-17 01:06:09.159294 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159298 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:06:09.159301 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:06:09.159305 | orchestrator |
2026-04-17 01:06:09.159309 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-17 01:06:09.159313 | orchestrator | Friday 17 April 2026 01:05:29 +0000 (0:00:14.135) 0:02:10.494 **********
2026-04-17 01:06:09.159317 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159329 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:06:09.159337 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:06:09.159349 | orchestrator |
2026-04-17 01:06:09.159356 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-17 01:06:09.159362 | orchestrator | Friday 17 April 2026 01:05:36 +0000 (0:00:07.450) 0:02:17.945 **********
2026-04-17 01:06:09.159368 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159375 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:06:09.159380 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:06:09.159384 | orchestrator |
2026-04-17 01:06:09.159388 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-17 01:06:09.159392 | orchestrator | Friday 17 April 2026 01:05:47 +0000 (0:00:10.444) 0:02:28.389 **********
2026-04-17 01:06:09.159395 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159401 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:06:09.159409 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:06:09.159418 | orchestrator |
2026-04-17 01:06:09.159424 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-17 01:06:09.159430 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:06.318) 0:02:34.707 **********
2026-04-17 01:06:09.159437 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:06:09.159443 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:06:09.159449 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:06:09.159455 | orchestrator |
2026-04-17 01:06:09.159460 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-17 01:06:09.159466 | orchestrator | Friday 17 April 2026 01:06:01 +0000 (0:00:07.710) 0:02:42.418
********** 2026-04-17 01:06:09.159472 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:09.159478 | orchestrator | 2026-04-17 01:06:09.159484 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:06:09.159490 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 01:06:09.159498 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 01:06:09.159504 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 01:06:09.159511 | orchestrator | 2026-04-17 01:06:09.159517 | orchestrator | 2026-04-17 01:06:09.159523 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:06:09.159530 | orchestrator | Friday 17 April 2026 01:06:07 +0000 (0:00:06.645) 0:02:49.063 ********** 2026-04-17 01:06:09.159537 | orchestrator | =============================================================================== 2026-04-17 01:06:09.159544 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.95s 2026-04-17 01:06:09.159550 | orchestrator | designate : Restart designate-api container ---------------------------- 14.14s 2026-04-17 01:06:09.159610 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.06s 2026-04-17 01:06:09.159618 | orchestrator | designate : Restart designate-producer container ----------------------- 10.44s 2026-04-17 01:06:09.159624 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.31s 2026-04-17 01:06:09.159681 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.05s 2026-04-17 01:06:09.159688 | orchestrator | designate : Restart designate-worker container -------------------------- 7.71s 2026-04-17 
01:06:09.159695 | orchestrator | designate : Restart designate-central container ------------------------- 7.45s 2026-04-17 01:06:09.159746 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.38s 2026-04-17 01:06:09.159753 | orchestrator | designate : Copying over config.json files for services ----------------- 6.74s 2026-04-17 01:06:09.159759 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.65s 2026-04-17 01:06:09.159766 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.32s 2026-04-17 01:06:09.159773 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.12s 2026-04-17 01:06:09.159939 | orchestrator | designate : Check designate containers ---------------------------------- 4.78s 2026-04-17 01:06:09.159946 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.28s 2026-04-17 01:06:09.159950 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.27s 2026-04-17 01:06:09.159954 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.15s 2026-04-17 01:06:09.159960 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.01s 2026-04-17 01:06:09.159965 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.77s 2026-04-17 01:06:09.159968 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.54s 2026-04-17 01:06:09.159972 | orchestrator | 2026-04-17 01:06:09 | INFO  | Task f3a4bfac-99a7-4ec6-be2b-e3b0df2b4cbe is in state SUCCESS 2026-04-17 01:06:09.159980 | orchestrator | 2026-04-17 01:06:09.159984 | orchestrator | 2026-04-17 01:06:09.159988 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:06:09.159991 | orchestrator | 
2026-04-17 01:06:09.159995 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:06:09.159999 | orchestrator | Friday 17 April 2026 01:01:55 +0000 (0:00:00.325) 0:00:00.325 ********** 2026-04-17 01:06:09.160003 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:06:09.160007 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:06:09.160010 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:06:09.160014 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:06:09.160018 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:06:09.160022 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:06:09.160026 | orchestrator | 2026-04-17 01:06:09.160029 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:06:09.160033 | orchestrator | Friday 17 April 2026 01:01:56 +0000 (0:00:00.536) 0:00:00.862 ********** 2026-04-17 01:06:09.160037 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-17 01:06:09.160041 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-17 01:06:09.160045 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-17 01:06:09.160048 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-17 01:06:09.160052 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-17 01:06:09.160056 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-17 01:06:09.160060 | orchestrator | 2026-04-17 01:06:09.160064 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-17 01:06:09.160067 | orchestrator | 2026-04-17 01:06:09.160071 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 01:06:09.160075 | orchestrator | Friday 17 April 2026 01:01:56 +0000 (0:00:00.515) 0:00:01.377 ********** 2026-04-17 01:06:09.160079 | orchestrator | included: 
/ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:06:09.160083 | orchestrator | 2026-04-17 01:06:09.160087 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-17 01:06:09.160091 | orchestrator | Friday 17 April 2026 01:01:57 +0000 (0:00:00.827) 0:00:02.205 ********** 2026-04-17 01:06:09.160094 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:06:09.160098 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:06:09.160102 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:06:09.160106 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:06:09.160109 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:06:09.160113 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:06:09.160117 | orchestrator | 2026-04-17 01:06:09.160121 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-17 01:06:09.160124 | orchestrator | Friday 17 April 2026 01:01:59 +0000 (0:00:01.357) 0:00:03.563 ********** 2026-04-17 01:06:09.160128 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:06:09.160135 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:06:09.160139 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:06:09.160142 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:06:09.160146 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:06:09.160150 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:06:09.160154 | orchestrator | 2026-04-17 01:06:09.160157 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-17 01:06:09.160161 | orchestrator | Friday 17 April 2026 01:02:00 +0000 (0:00:01.170) 0:00:04.733 ********** 2026-04-17 01:06:09.160165 | orchestrator | ok: [testbed-node-0] => { 2026-04-17 01:06:09.160169 | orchestrator |  "changed": false, 2026-04-17 01:06:09.160173 | orchestrator |  "msg": "All assertions passed" 2026-04-17 
01:06:09.160177 | orchestrator | } 2026-04-17 01:06:09.160182 | orchestrator | ok: [testbed-node-1] => { 2026-04-17 01:06:09.160189 | orchestrator |  "changed": false, 2026-04-17 01:06:09.160211 | orchestrator |  "msg": "All assertions passed" 2026-04-17 01:06:09.160219 | orchestrator | } 2026-04-17 01:06:09.160225 | orchestrator | ok: [testbed-node-2] => { 2026-04-17 01:06:09.160231 | orchestrator |  "changed": false, 2026-04-17 01:06:09.160238 | orchestrator |  "msg": "All assertions passed" 2026-04-17 01:06:09.160244 | orchestrator | } 2026-04-17 01:06:09.160250 | orchestrator | ok: [testbed-node-3] => { 2026-04-17 01:06:09.160255 | orchestrator |  "changed": false, 2026-04-17 01:06:09.160261 | orchestrator |  "msg": "All assertions passed" 2026-04-17 01:06:09.160268 | orchestrator | } 2026-04-17 01:06:09.160275 | orchestrator | ok: [testbed-node-4] => { 2026-04-17 01:06:09.160281 | orchestrator |  "changed": false, 2026-04-17 01:06:09.160288 | orchestrator |  "msg": "All assertions passed" 2026-04-17 01:06:09.160295 | orchestrator | } 2026-04-17 01:06:09.160302 | orchestrator | ok: [testbed-node-5] => { 2026-04-17 01:06:09.160309 | orchestrator |  "changed": false, 2026-04-17 01:06:09.160313 | orchestrator |  "msg": "All assertions passed" 2026-04-17 01:06:09.160316 | orchestrator | } 2026-04-17 01:06:09.160320 | orchestrator | 2026-04-17 01:06:09.160324 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-17 01:06:09.160328 | orchestrator | Friday 17 April 2026 01:02:00 +0000 (0:00:00.563) 0:00:05.296 ********** 2026-04-17 01:06:09.160331 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.160335 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.160339 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.160342 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.160346 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.160350 | orchestrator | 
skipping: [testbed-node-5] 2026-04-17 01:06:09.160354 | orchestrator | 2026-04-17 01:06:09.160358 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-17 01:06:09.160361 | orchestrator | Friday 17 April 2026 01:02:01 +0000 (0:00:00.582) 0:00:05.879 ********** 2026-04-17 01:06:09.160365 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-17 01:06:09.160369 | orchestrator | 2026-04-17 01:06:09.160373 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-17 01:06:09.160380 | orchestrator | Friday 17 April 2026 01:02:05 +0000 (0:00:03.659) 0:00:09.539 ********** 2026-04-17 01:06:09.160384 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-17 01:06:09.160388 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-17 01:06:09.160392 | orchestrator | 2026-04-17 01:06:09.160400 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-17 01:06:09.160404 | orchestrator | Friday 17 April 2026 01:02:11 +0000 (0:00:06.130) 0:00:15.669 ********** 2026-04-17 01:06:09.160408 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:06:09.160412 | orchestrator | 2026-04-17 01:06:09.160415 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-17 01:06:09.160419 | orchestrator | Friday 17 April 2026 01:02:14 +0000 (0:00:03.398) 0:00:19.068 ********** 2026-04-17 01:06:09.160427 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-17 01:06:09.160431 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:06:09.160434 | orchestrator | 2026-04-17 01:06:09.160438 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 
2026-04-17 01:06:09.160442 | orchestrator | Friday 17 April 2026 01:02:19 +0000 (0:00:04.824) 0:00:23.893 ********** 2026-04-17 01:06:09.160446 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:06:09.160450 | orchestrator | 2026-04-17 01:06:09.160453 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-17 01:06:09.160457 | orchestrator | Friday 17 April 2026 01:02:23 +0000 (0:00:03.632) 0:00:27.525 ********** 2026-04-17 01:06:09.160461 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-17 01:06:09.160621 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-17 01:06:09.160627 | orchestrator | 2026-04-17 01:06:09.160631 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 01:06:09.160635 | orchestrator | Friday 17 April 2026 01:02:30 +0000 (0:00:07.484) 0:00:35.010 ********** 2026-04-17 01:06:09.160639 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.160643 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.160646 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.160650 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.160654 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.160658 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.160661 | orchestrator | 2026-04-17 01:06:09.160665 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-17 01:06:09.160669 | orchestrator | Friday 17 April 2026 01:02:31 +0000 (0:00:00.539) 0:00:35.549 ********** 2026-04-17 01:06:09.160673 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.160676 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.160680 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.160684 | orchestrator | skipping: [testbed-node-3] 2026-04-17 
01:06:09.160688 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.160692 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.160695 | orchestrator | 2026-04-17 01:06:09.160699 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-17 01:06:09.160703 | orchestrator | Friday 17 April 2026 01:02:33 +0000 (0:00:02.343) 0:00:37.893 ********** 2026-04-17 01:06:09.160707 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:06:09.160711 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:06:09.160714 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:06:09.160718 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:06:09.160722 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:06:09.160726 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:06:09.160730 | orchestrator | 2026-04-17 01:06:09.160733 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-17 01:06:09.160737 | orchestrator | Friday 17 April 2026 01:02:34 +0000 (0:00:00.917) 0:00:38.810 ********** 2026-04-17 01:06:09.160741 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.160745 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.160749 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.160752 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.160756 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.160760 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.160763 | orchestrator | 2026-04-17 01:06:09.160767 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-17 01:06:09.160771 | orchestrator | Friday 17 April 2026 01:02:36 +0000 (0:00:01.954) 0:00:40.765 ********** 2026-04-17 01:06:09.160776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.160791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.160795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.160800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.160804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.160808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.160814 | orchestrator | 2026-04-17 01:06:09.160818 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-17 01:06:09.160822 | orchestrator | Friday 17 April 2026 01:02:39 +0000 (0:00:03.061) 0:00:43.827 ********** 2026-04-17 01:06:09.160826 | orchestrator | [WARNING]: Skipped 2026-04-17 01:06:09.160830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-17 01:06:09.160836 | orchestrator | due to this access issue: 2026-04-17 01:06:09.160839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-17 01:06:09.160843 | orchestrator | a directory 2026-04-17 01:06:09.160847 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 01:06:09.160851 | orchestrator | 2026-04-17 01:06:09.160855 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 01:06:09.160861 | orchestrator | Friday 17 April 2026 01:02:40 
+0000 (0:00:00.751) 0:00:44.579 ********** 2026-04-17 01:06:09.160865 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:06:09.160870 | orchestrator | 2026-04-17 01:06:09.160873 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-17 01:06:09.160877 | orchestrator | Friday 17 April 2026 01:02:41 +0000 (0:00:01.019) 0:00:45.598 ********** 2026-04-17 01:06:09.160881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.160885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.160889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.160898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.160907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.160911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.160915 | orchestrator | 2026-04-17 01:06:09.160919 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-17 01:06:09.160923 | orchestrator | Friday 17 April 2026 01:02:44 +0000 (0:00:02.818) 0:00:48.416 ********** 2026-04-17 01:06:09.160927 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.160934 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.160938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.160942 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.160947 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.160953 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.160958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.160962 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.160966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.160970 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.160973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.160980 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.160984 | orchestrator | 2026-04-17 01:06:09.160988 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-17 01:06:09.160991 | orchestrator | Friday 17 April 2026 01:02:46 +0000 (0:00:02.732) 0:00:51.149 ********** 2026-04-17 01:06:09.160995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.160999 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161013 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161020 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161031 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161039 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161047 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161050 | orchestrator | 2026-04-17 01:06:09.161054 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-17 01:06:09.161058 | orchestrator | Friday 17 April 2026 01:02:49 +0000 (0:00:02.925) 0:00:54.074 ********** 2026-04-17 01:06:09.161064 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161067 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161071 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161075 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161079 | 
orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161082 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161086 | orchestrator | 2026-04-17 01:06:09.161090 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-17 01:06:09.161096 | orchestrator | Friday 17 April 2026 01:02:51 +0000 (0:00:01.972) 0:00:56.046 ********** 2026-04-17 01:06:09.161100 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161104 | orchestrator | 2026-04-17 01:06:09.161107 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-17 01:06:09.161111 | orchestrator | Friday 17 April 2026 01:02:51 +0000 (0:00:00.207) 0:00:56.254 ********** 2026-04-17 01:06:09.161115 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161119 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161123 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161320 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161333 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161337 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161341 | orchestrator | 2026-04-17 01:06:09.161344 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-17 01:06:09.161353 | orchestrator | Friday 17 April 2026 01:02:52 +0000 (0:00:00.477) 0:00:56.731 ********** 2026-04-17 01:06:09.161358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161362 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161370 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161378 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161393 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161404 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161412 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161416 | orchestrator | 2026-04-17 01:06:09.161446 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-17 01:06:09.161451 | orchestrator | Friday 17 April 2026 01:02:55 +0000 (0:00:02.811) 0:00:59.543 ********** 2026-04-17 01:06:09.161455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.161469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161476 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.161481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.161489 | orchestrator | 2026-04-17 01:06:09.161493 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-17 01:06:09.161497 | orchestrator | Friday 17 April 2026 01:02:58 +0000 (0:00:03.325) 0:01:02.869 ********** 2026-04-17 01:06:09.161500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.161520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.161528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.161532 | orchestrator | 2026-04-17 01:06:09.161536 | orchestrator | TASK 
[neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-17 01:06:09.161547 | orchestrator | Friday 17 April 2026 01:03:03 +0000 (0:00:05.416) 0:01:08.285 ********** 2026-04-17 01:06:09.161553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161558 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161565 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161573 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161581 | 
orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161593 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161603 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161607 | orchestrator | 2026-04-17 01:06:09.161611 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-17 01:06:09.161615 | orchestrator | Friday 17 April 2026 01:03:06 +0000 
(0:00:02.213) 0:01:10.498 ********** 2026-04-17 01:06:09.161618 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:09.161622 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161626 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161630 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:06:09.161633 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161637 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:06:09.161641 | orchestrator | 2026-04-17 01:06:09.161645 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-17 01:06:09.161648 | orchestrator | Friday 17 April 2026 01:03:09 +0000 (0:00:03.721) 0:01:14.220 ********** 2026-04-17 01:06:09.161652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161656 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161664 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.161675 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.161696 | orchestrator | 2026-04-17 01:06:09.161700 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-17 01:06:09.161704 | orchestrator | Friday 17 April 2026 01:03:13 +0000 (0:00:03.875) 0:01:18.097 ********** 2026-04-17 01:06:09.161708 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161711 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161715 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161719 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161726 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161729 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161733 | orchestrator | 2026-04-17 01:06:09.161737 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-17 01:06:09.161741 | orchestrator | Friday 17 April 2026 01:03:16 +0000 (0:00:02.851) 0:01:20.948 ********** 2026-04-17 01:06:09.161744 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161748 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161752 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161756 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161759 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161763 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161767 | orchestrator | 2026-04-17 01:06:09.161771 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-17 01:06:09.161774 | orchestrator | Friday 17 April 2026 01:03:18 +0000 (0:00:02.090) 0:01:23.038 ********** 2026-04-17 01:06:09.161778 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161782 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161786 | orchestrator | skipping: 
[testbed-node-1] 2026-04-17 01:06:09.161789 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161793 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161797 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161800 | orchestrator | 2026-04-17 01:06:09.161804 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-17 01:06:09.161808 | orchestrator | Friday 17 April 2026 01:03:20 +0000 (0:00:02.179) 0:01:25.218 ********** 2026-04-17 01:06:09.161812 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161815 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161819 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161823 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161827 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161832 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161836 | orchestrator | 2026-04-17 01:06:09.161840 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-17 01:06:09.161844 | orchestrator | Friday 17 April 2026 01:03:22 +0000 (0:00:01.769) 0:01:26.988 ********** 2026-04-17 01:06:09.161847 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161851 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161855 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161859 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161864 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161868 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161872 | orchestrator | 2026-04-17 01:06:09.161876 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-17 01:06:09.161880 | orchestrator | Friday 17 April 2026 01:03:24 +0000 (0:00:01.754) 0:01:28.742 ********** 2026-04-17 01:06:09.161883 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 01:06:09.161887 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161891 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161895 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161898 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161902 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161906 | orchestrator | 2026-04-17 01:06:09.161909 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-17 01:06:09.161913 | orchestrator | Friday 17 April 2026 01:03:26 +0000 (0:00:01.834) 0:01:30.576 ********** 2026-04-17 01:06:09.161917 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 01:06:09.161921 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161925 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 01:06:09.161928 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161932 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 01:06:09.161939 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.161943 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 01:06:09.161946 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.161950 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 01:06:09.161954 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.161957 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-17 01:06:09.161961 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.161965 | orchestrator | 2026-04-17 01:06:09.161969 | orchestrator | TASK [neutron : Copying over 
l3_agent.ini] ************************************* 2026-04-17 01:06:09.161972 | orchestrator | Friday 17 April 2026 01:03:28 +0000 (0:00:02.365) 0:01:32.941 ********** 2026-04-17 01:06:09.161976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161980 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.161984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.161988 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.161996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162000 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162011 | orchestrator | 
skipping: [testbed-node-4] 2026-04-17 01:06:09.162031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162035 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162043 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162047 | orchestrator | 2026-04-17 01:06:09.162051 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-17 01:06:09.162054 | orchestrator | Friday 17 April 2026 01:03:30 +0000 (0:00:02.192) 0:01:35.134 
********** 2026-04-17 01:06:09.162058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162062 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162078 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162086 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162094 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162103 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162115 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162119 | orchestrator | 2026-04-17 01:06:09.162123 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-17 01:06:09.162132 | orchestrator | Friday 17 April 2026 01:03:32 +0000 (0:00:02.006) 0:01:37.141 ********** 2026-04-17 01:06:09.162137 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162143 | orchestrator | skipping: [testbed-node-2] 
2026-04-17 01:06:09.162148 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162152 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162157 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162161 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162166 | orchestrator | 2026-04-17 01:06:09.162170 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-17 01:06:09.162175 | orchestrator | Friday 17 April 2026 01:03:35 +0000 (0:00:02.290) 0:01:39.431 ********** 2026-04-17 01:06:09.162179 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162184 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162188 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162192 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:06:09.162210 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:06:09.162217 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:06:09.162224 | orchestrator | 2026-04-17 01:06:09.162230 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-17 01:06:09.162236 | orchestrator | Friday 17 April 2026 01:03:38 +0000 (0:00:03.343) 0:01:42.775 ********** 2026-04-17 01:06:09.162241 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162246 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162251 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162255 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162260 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162264 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162268 | orchestrator | 2026-04-17 01:06:09.162273 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-17 01:06:09.162277 | orchestrator | Friday 17 April 2026 01:03:42 +0000 (0:00:03.713) 0:01:46.488 ********** 
2026-04-17 01:06:09.162281 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162285 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162289 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162292 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162296 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162300 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162304 | orchestrator | 2026-04-17 01:06:09.162307 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-17 01:06:09.162311 | orchestrator | Friday 17 April 2026 01:03:44 +0000 (0:00:02.017) 0:01:48.506 ********** 2026-04-17 01:06:09.162315 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162319 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162322 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162326 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162330 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162333 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162337 | orchestrator | 2026-04-17 01:06:09.162341 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-17 01:06:09.162345 | orchestrator | Friday 17 April 2026 01:03:46 +0000 (0:00:02.043) 0:01:50.550 ********** 2026-04-17 01:06:09.162348 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162352 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162356 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162360 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162363 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162367 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162374 | orchestrator | 2026-04-17 01:06:09.162382 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2026-04-17 01:06:09.162390 | orchestrator | Friday 17 April 2026 01:03:48 +0000 (0:00:02.425) 0:01:52.976 ********** 2026-04-17 01:06:09.162403 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162409 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162415 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162422 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162429 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162435 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162442 | orchestrator | 2026-04-17 01:06:09.162448 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-17 01:06:09.162454 | orchestrator | Friday 17 April 2026 01:03:51 +0000 (0:00:02.517) 0:01:55.494 ********** 2026-04-17 01:06:09.162461 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162468 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162474 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162480 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162483 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162487 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162491 | orchestrator | 2026-04-17 01:06:09.162495 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-17 01:06:09.162498 | orchestrator | Friday 17 April 2026 01:03:52 +0000 (0:00:01.624) 0:01:57.118 ********** 2026-04-17 01:06:09.162502 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162506 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162510 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162513 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162517 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162521 | orchestrator | skipping: 
[testbed-node-5] 2026-04-17 01:06:09.162524 | orchestrator | 2026-04-17 01:06:09.162528 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-17 01:06:09.162532 | orchestrator | Friday 17 April 2026 01:03:56 +0000 (0:00:03.631) 0:02:00.750 ********** 2026-04-17 01:06:09.162536 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 01:06:09.162540 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162544 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 01:06:09.162548 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162555 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 01:06:09.162558 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162562 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 01:06:09.162566 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162573 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 01:06:09.162577 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162581 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-17 01:06:09.162584 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162588 | orchestrator | 2026-04-17 01:06:09.162592 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-17 01:06:09.162596 | orchestrator | Friday 17 April 2026 01:03:58 +0000 (0:00:02.330) 0:02:03.081 ********** 2026-04-17 01:06:09.162600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162607 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162615 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-17 01:06:09.162623 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162633 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162643 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-17 01:06:09.162654 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162658 | orchestrator | 2026-04-17 01:06:09.162661 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-17 01:06:09.162665 | orchestrator | Friday 17 April 2026 01:04:01 +0000 (0:00:02.343) 0:02:05.424 ********** 2026-04-17 01:06:09.162669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.162674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.162682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-17 01:06:09.162686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.162693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.162697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-17 01:06:09.162701 | orchestrator | 2026-04-17 01:06:09.162707 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-17 01:06:09.162715 | orchestrator | Friday 17 April 2026 01:04:03 +0000 (0:00:02.772) 0:02:08.197 ********** 2026-04-17 01:06:09.162724 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:09.162729 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:09.162735 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:09.162741 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:06:09.162747 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:06:09.162752 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:06:09.162759 | orchestrator | 2026-04-17 01:06:09.162765 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-17 01:06:09.162771 | orchestrator | Friday 17 April 2026 01:04:04 +0000 (0:00:00.704) 0:02:08.902 ********** 2026-04-17 01:06:09.162778 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:09.162784 | orchestrator | 2026-04-17 01:06:09.162791 | orchestrator | TASK 
[neutron : Creating Neutron database user and setting permissions] ******** 2026-04-17 01:06:09.162795 | orchestrator | Friday 17 April 2026 01:04:06 +0000 (0:00:02.426) 0:02:11.329 ********** 2026-04-17 01:06:09.162799 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:09.162803 | orchestrator | 2026-04-17 01:06:09.162807 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-17 01:06:09.162810 | orchestrator | Friday 17 April 2026 01:04:09 +0000 (0:00:02.877) 0:02:14.207 ********** 2026-04-17 01:06:09.162814 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:09.162818 | orchestrator | 2026-04-17 01:06:09.162821 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 01:06:09.162825 | orchestrator | Friday 17 April 2026 01:04:48 +0000 (0:00:38.372) 0:02:52.579 ********** 2026-04-17 01:06:09.162829 | orchestrator | 2026-04-17 01:06:09.162833 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 01:06:09.162836 | orchestrator | Friday 17 April 2026 01:04:48 +0000 (0:00:00.074) 0:02:52.654 ********** 2026-04-17 01:06:09.162840 | orchestrator | 2026-04-17 01:06:09.162851 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 01:06:09.162855 | orchestrator | Friday 17 April 2026 01:04:48 +0000 (0:00:00.062) 0:02:52.716 ********** 2026-04-17 01:06:09.162859 | orchestrator | 2026-04-17 01:06:09.162863 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 01:06:09.162866 | orchestrator | Friday 17 April 2026 01:04:48 +0000 (0:00:00.062) 0:02:52.778 ********** 2026-04-17 01:06:09.162870 | orchestrator | 2026-04-17 01:06:09.162877 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 01:06:09.162881 | orchestrator | Friday 17 April 2026 
01:04:48 +0000 (0:00:00.063) 0:02:52.842 ********** 2026-04-17 01:06:09.162884 | orchestrator | 2026-04-17 01:06:09.162888 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-17 01:06:09.162892 | orchestrator | Friday 17 April 2026 01:04:48 +0000 (0:00:00.063) 0:02:52.905 ********** 2026-04-17 01:06:09.162896 | orchestrator | 2026-04-17 01:06:09.162900 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-17 01:06:09.162903 | orchestrator | Friday 17 April 2026 01:04:48 +0000 (0:00:00.066) 0:02:52.972 ********** 2026-04-17 01:06:09.162907 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:09.162911 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:06:09.162915 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:06:09.162918 | orchestrator | 2026-04-17 01:06:09.162922 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-17 01:06:09.162926 | orchestrator | Friday 17 April 2026 01:05:12 +0000 (0:00:23.810) 0:03:16.782 ********** 2026-04-17 01:06:09.162930 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:06:09.162933 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:06:09.162937 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:06:09.162941 | orchestrator | 2026-04-17 01:06:09.162945 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:06:09.162948 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-17 01:06:09.162953 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-17 01:06:09.162957 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-17 01:06:09.162961 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 
failed=0 skipped=32  rescued=0 ignored=0 2026-04-17 01:06:09.162964 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-17 01:06:09.162968 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-17 01:06:09.162972 | orchestrator | 2026-04-17 01:06:09.162976 | orchestrator | 2026-04-17 01:06:09.162980 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:06:09.162983 | orchestrator | Friday 17 April 2026 01:06:07 +0000 (0:00:54.932) 0:04:11.715 ********** 2026-04-17 01:06:09.162987 | orchestrator | =============================================================================== 2026-04-17 01:06:09.162991 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 54.93s 2026-04-17 01:06:09.162995 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.37s 2026-04-17 01:06:09.162998 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.81s 2026-04-17 01:06:09.163002 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.48s 2026-04-17 01:06:09.163006 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.13s 2026-04-17 01:06:09.163010 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.42s 2026-04-17 01:06:09.163016 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.82s 2026-04-17 01:06:09.163020 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.88s 2026-04-17 01:06:09.163024 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.72s 2026-04-17 01:06:09.163028 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 
3.71s 2026-04-17 01:06:09.163031 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.66s 2026-04-17 01:06:09.163035 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.63s 2026-04-17 01:06:09.163039 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.63s 2026-04-17 01:06:09.163043 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.40s 2026-04-17 01:06:09.163046 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.34s 2026-04-17 01:06:09.163050 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.33s 2026-04-17 01:06:09.163054 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.06s 2026-04-17 01:06:09.163058 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.92s 2026-04-17 01:06:09.163061 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.88s 2026-04-17 01:06:09.163065 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.85s 2026-04-17 01:06:09.163071 | orchestrator | 2026-04-17 01:06:09 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:09.163393 | orchestrator | 2026-04-17 01:06:09 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:09.165036 | orchestrator | 2026-04-17 01:06:09 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:09.167467 | orchestrator | 2026-04-17 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:12.208708 | orchestrator | 2026-04-17 01:06:12 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:12.208794 | orchestrator | 2026-04-17 01:06:12 | INFO  | Task 
c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:12.208804 | orchestrator | 2026-04-17 01:06:12 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:12.208811 | orchestrator | 2026-04-17 01:06:12 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:12.209064 | orchestrator | 2026-04-17 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:15.250586 | orchestrator | 2026-04-17 01:06:15 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:15.253265 | orchestrator | 2026-04-17 01:06:15 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:15.253339 | orchestrator | 2026-04-17 01:06:15 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:15.255286 | orchestrator | 2026-04-17 01:06:15 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:15.255344 | orchestrator | 2026-04-17 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:18.285830 | orchestrator | 2026-04-17 01:06:18 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:18.286153 | orchestrator | 2026-04-17 01:06:18 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:18.286882 | orchestrator | 2026-04-17 01:06:18 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:18.287398 | orchestrator | 2026-04-17 01:06:18 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:18.287453 | orchestrator | 2026-04-17 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:21.325767 | orchestrator | 2026-04-17 01:06:21 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:21.326273 | orchestrator | 2026-04-17 01:06:21 | INFO  | Task 
c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:21.327581 | orchestrator | 2026-04-17 01:06:21 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:21.329695 | orchestrator | 2026-04-17 01:06:21 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:21.329740 | orchestrator | 2026-04-17 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:24.373989 | orchestrator | 2026-04-17 01:06:24 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:24.379281 | orchestrator | 2026-04-17 01:06:24 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:24.379916 | orchestrator | 2026-04-17 01:06:24 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:24.382163 | orchestrator | 2026-04-17 01:06:24 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:24.382449 | orchestrator | 2026-04-17 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:27.425316 | orchestrator | 2026-04-17 01:06:27 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:27.426458 | orchestrator | 2026-04-17 01:06:27 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:27.427876 | orchestrator | 2026-04-17 01:06:27 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:27.429000 | orchestrator | 2026-04-17 01:06:27 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:27.429051 | orchestrator | 2026-04-17 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:30.477378 | orchestrator | 2026-04-17 01:06:30 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:30.479968 | orchestrator | 2026-04-17 01:06:30 | INFO  | Task 
c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:30.481137 | orchestrator | 2026-04-17 01:06:30 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:30.482828 | orchestrator | 2026-04-17 01:06:30 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:30.483077 | orchestrator | 2026-04-17 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:33.524079 | orchestrator | 2026-04-17 01:06:33 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:33.524991 | orchestrator | 2026-04-17 01:06:33 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:33.526083 | orchestrator | 2026-04-17 01:06:33 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state STARTED 2026-04-17 01:06:33.527124 | orchestrator | 2026-04-17 01:06:33 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:33.527394 | orchestrator | 2026-04-17 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:36.553279 | orchestrator | 2026-04-17 01:06:36 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:36.554580 | orchestrator | 2026-04-17 01:06:36 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:36.556027 | orchestrator | 2026-04-17 01:06:36 | INFO  | Task 89c9976b-39e0-4b01-9e22-6f7d1cedd579 is in state SUCCESS 2026-04-17 01:06:36.557153 | orchestrator | 2026-04-17 01:06:36.557199 | orchestrator | 2026-04-17 01:06:36.557248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:06:36.557258 | orchestrator | 2026-04-17 01:06:36.557265 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:06:36.557272 | orchestrator | Friday 17 April 2026 01:05:27 +0000 (0:00:00.316) 0:00:00.316 
********** 2026-04-17 01:06:36.557278 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:06:36.557286 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:06:36.557292 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:06:36.557298 | orchestrator | 2026-04-17 01:06:36.557305 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:06:36.557312 | orchestrator | Friday 17 April 2026 01:05:27 +0000 (0:00:00.284) 0:00:00.600 ********** 2026-04-17 01:06:36.557319 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-17 01:06:36.557326 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-17 01:06:36.557333 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-17 01:06:36.557339 | orchestrator | 2026-04-17 01:06:36.557346 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-17 01:06:36.557352 | orchestrator | 2026-04-17 01:06:36.557359 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 01:06:36.557372 | orchestrator | Friday 17 April 2026 01:05:28 +0000 (0:00:00.288) 0:00:00.888 ********** 2026-04-17 01:06:36.557475 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:06:36.557482 | orchestrator | 2026-04-17 01:06:36.557486 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-17 01:06:36.557491 | orchestrator | Friday 17 April 2026 01:05:29 +0000 (0:00:00.802) 0:00:01.690 ********** 2026-04-17 01:06:36.557495 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-17 01:06:36.557499 | orchestrator | 2026-04-17 01:06:36.557503 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-17 01:06:36.557507 | orchestrator | Friday 17 
April 2026 01:05:33 +0000 (0:00:04.060) 0:00:05.751 ********** 2026-04-17 01:06:36.557512 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-17 01:06:36.557516 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-17 01:06:36.557520 | orchestrator | 2026-04-17 01:06:36.557523 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-17 01:06:36.557527 | orchestrator | Friday 17 April 2026 01:05:40 +0000 (0:00:07.609) 0:00:13.360 ********** 2026-04-17 01:06:36.557531 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:06:36.557535 | orchestrator | 2026-04-17 01:06:36.557539 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-17 01:06:36.557543 | orchestrator | Friday 17 April 2026 01:05:43 +0000 (0:00:02.799) 0:00:16.160 ********** 2026-04-17 01:06:36.557546 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-17 01:06:36.557550 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:06:36.557554 | orchestrator | 2026-04-17 01:06:36.557557 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-17 01:06:36.557561 | orchestrator | Friday 17 April 2026 01:05:47 +0000 (0:00:03.885) 0:00:20.046 ********** 2026-04-17 01:06:36.557565 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:06:36.557569 | orchestrator | 2026-04-17 01:06:36.557573 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-17 01:06:36.557577 | orchestrator | Friday 17 April 2026 01:05:51 +0000 (0:00:03.807) 0:00:23.853 ********** 2026-04-17 01:06:36.557599 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-17 01:06:36.557603 | 
orchestrator | 2026-04-17 01:06:36.557607 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 01:06:36.557610 | orchestrator | Friday 17 April 2026 01:05:55 +0000 (0:00:03.905) 0:00:27.759 ********** 2026-04-17 01:06:36.557625 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:36.557629 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:36.557633 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:36.557637 | orchestrator | 2026-04-17 01:06:36.557640 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-17 01:06:36.557645 | orchestrator | Friday 17 April 2026 01:05:55 +0000 (0:00:00.561) 0:00:28.321 ********** 2026-04-17 01:06:36.557655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557698 | orchestrator | 2026-04-17 01:06:36.557704 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-17 01:06:36.557710 | orchestrator | Friday 17 April 2026 01:05:57 +0000 (0:00:01.813) 0:00:30.134 ********** 2026-04-17 01:06:36.557716 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:36.557722 | 
orchestrator | 2026-04-17 01:06:36.557728 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-17 01:06:36.557735 | orchestrator | Friday 17 April 2026 01:05:57 +0000 (0:00:00.083) 0:00:30.218 ********** 2026-04-17 01:06:36.557750 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:36.557756 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:36.557762 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:36.557768 | orchestrator | 2026-04-17 01:06:36.557775 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-17 01:06:36.557781 | orchestrator | Friday 17 April 2026 01:05:57 +0000 (0:00:00.235) 0:00:30.454 ********** 2026-04-17 01:06:36.557787 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:06:36.557795 | orchestrator | 2026-04-17 01:06:36.557801 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-17 01:06:36.557808 | orchestrator | Friday 17 April 2026 01:05:58 +0000 (0:00:00.504) 0:00:30.958 ********** 2026-04-17 01:06:36.557819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-04-17 01:06:36.557848 | orchestrator | 2026-04-17 01:06:36.557855 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-17 01:06:36.557861 | orchestrator | Friday 17 April 2026 01:05:59 +0000 (0:00:01.590) 0:00:32.549 ********** 2026-04-17 01:06:36.557869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.557877 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:36.557885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.557889 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:36.557896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.557900 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:36.557904 | orchestrator | 2026-04-17 01:06:36.557908 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-17 01:06:36.557912 | orchestrator | Friday 17 April 2026 01:06:00 +0000 (0:00:00.430) 0:00:32.979 ********** 2026-04-17 01:06:36.557916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.557920 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:36.557927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.557931 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:36.557938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.557942 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:36.557945 | orchestrator | 2026-04-17 01:06:36.557949 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-17 01:06:36.557953 | orchestrator | Friday 17 April 2026 01:06:00 +0000 (0:00:00.588) 0:00:33.567 ********** 2026-04-17 01:06:36.557960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557975 | orchestrator | 2026-04-17 01:06:36.557979 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-17 01:06:36.557983 | orchestrator | Friday 17 April 2026 01:06:02 +0000 (0:00:01.368) 
0:00:34.936 ********** 2026-04-17 01:06:36.557990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.557994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.558002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.558007 | orchestrator | 2026-04-17 01:06:36.558011 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-17 01:06:36.558070 | orchestrator | Friday 17 April 2026 01:06:04 +0000 (0:00:02.679) 0:00:37.616 ********** 2026-04-17 01:06:36.558075 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-17 01:06:36.558079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-17 01:06:36.558083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-17 01:06:36.558086 | orchestrator | 2026-04-17 01:06:36.558090 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-17 01:06:36.558094 | orchestrator | Friday 17 April 2026 01:06:06 +0000 (0:00:01.561) 0:00:39.177 ********** 2026-04-17 01:06:36.558098 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:36.558101 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:06:36.558105 | orchestrator 
| changed: [testbed-node-2] 2026-04-17 01:06:36.558110 | orchestrator | 2026-04-17 01:06:36.558116 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-17 01:06:36.558122 | orchestrator | Friday 17 April 2026 01:06:07 +0000 (0:00:01.323) 0:00:40.500 ********** 2026-04-17 01:06:36.558129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.558135 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:06:36.558146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.558152 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:06:36.558166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-17 01:06:36.558178 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:06:36.558183 | orchestrator | 2026-04-17 01:06:36.558188 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-17 01:06:36.558192 | orchestrator | Friday 17 April 2026 01:06:08 +0000 (0:00:00.838) 0:00:41.339 ********** 2026-04-17 01:06:36.558197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.558202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.558241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-17 01:06:36.558248 | orchestrator | 2026-04-17 01:06:36.558255 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-17 01:06:36.558262 | orchestrator | Friday 17 April 2026 01:06:09 +0000 (0:00:01.256) 0:00:42.596 ********** 2026-04-17 01:06:36.558269 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:36.558275 | orchestrator | 2026-04-17 01:06:36.558283 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-17 01:06:36.558287 | orchestrator | Friday 17 April 2026 01:06:12 +0000 (0:00:02.277) 0:00:44.874 ********** 2026-04-17 01:06:36.558291 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:36.558296 | orchestrator | 2026-04-17 01:06:36.558300 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-17 01:06:36.558305 | orchestrator | Friday 17 April 2026 01:06:14 +0000 (0:00:02.510) 0:00:47.384 ********** 2026-04-17 01:06:36.558310 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:36.558318 | orchestrator | 2026-04-17 01:06:36.558321 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-17 01:06:36.558325 | orchestrator | Friday 17 April 2026 01:06:28 +0000 (0:00:13.856) 0:01:01.241 ********** 2026-04-17 01:06:36.558329 | orchestrator | 2026-04-17 01:06:36.558333 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-17 01:06:36.558337 | 
orchestrator | Friday 17 April 2026 01:06:28 +0000 (0:00:00.062) 0:01:01.304 ********** 2026-04-17 01:06:36.558340 | orchestrator | 2026-04-17 01:06:36.558347 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-17 01:06:36.558351 | orchestrator | Friday 17 April 2026 01:06:28 +0000 (0:00:00.059) 0:01:01.363 ********** 2026-04-17 01:06:36.558355 | orchestrator | 2026-04-17 01:06:36.558359 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-17 01:06:36.558362 | orchestrator | Friday 17 April 2026 01:06:28 +0000 (0:00:00.064) 0:01:01.428 ********** 2026-04-17 01:06:36.558366 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:06:36.558370 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:06:36.558374 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:06:36.558378 | orchestrator | 2026-04-17 01:06:36.558381 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:06:36.558386 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-17 01:06:36.558392 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 01:06:36.558395 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 01:06:36.558399 | orchestrator | 2026-04-17 01:06:36.558403 | orchestrator | 2026-04-17 01:06:36.558407 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:06:36.558411 | orchestrator | Friday 17 April 2026 01:06:34 +0000 (0:00:05.691) 0:01:07.120 ********** 2026-04-17 01:06:36.558415 | orchestrator | =============================================================================== 2026-04-17 01:06:36.558418 | orchestrator | placement : Running placement bootstrap container 
---------------------- 13.86s 2026-04-17 01:06:36.558422 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.61s 2026-04-17 01:06:36.558426 | orchestrator | placement : Restart placement-api container ----------------------------- 5.69s 2026-04-17 01:06:36.558429 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.06s 2026-04-17 01:06:36.558433 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.91s 2026-04-17 01:06:36.558437 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.89s 2026-04-17 01:06:36.558441 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.81s 2026-04-17 01:06:36.558445 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.80s 2026-04-17 01:06:36.558448 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.68s 2026-04-17 01:06:36.558452 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.51s 2026-04-17 01:06:36.558456 | orchestrator | placement : Creating placement databases -------------------------------- 2.28s 2026-04-17 01:06:36.558460 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.81s 2026-04-17 01:06:36.558463 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.59s 2026-04-17 01:06:36.558467 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.56s 2026-04-17 01:06:36.558471 | orchestrator | placement : Copying over config.json files for services ----------------- 1.37s 2026-04-17 01:06:36.558475 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.32s 2026-04-17 01:06:36.558478 | orchestrator | placement : Check placement containers 
---------------------------------- 1.26s 2026-04-17 01:06:36.558486 | orchestrator | placement : Copying over existing policy file --------------------------- 0.84s 2026-04-17 01:06:36.558489 | orchestrator | placement : include_tasks ----------------------------------------------- 0.80s 2026-04-17 01:06:36.558493 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.59s 2026-04-17 01:06:36.558500 | orchestrator | 2026-04-17 01:06:36 | INFO  | Task 502e42fe-be47-471d-8ca1-5afd40a56146 is in state STARTED 2026-04-17 01:06:36.558738 | orchestrator | 2026-04-17 01:06:36 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:36.558814 | orchestrator | 2026-04-17 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:39.597613 | orchestrator | 2026-04-17 01:06:39 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:39.598808 | orchestrator | 2026-04-17 01:06:39 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:39.600694 | orchestrator | 2026-04-17 01:06:39 | INFO  | Task 502e42fe-be47-471d-8ca1-5afd40a56146 is in state STARTED 2026-04-17 01:06:39.602723 | orchestrator | 2026-04-17 01:06:39 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED 2026-04-17 01:06:39.602794 | orchestrator | 2026-04-17 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:06:42.645642 | orchestrator | 2026-04-17 01:06:42 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED 2026-04-17 01:06:42.645724 | orchestrator | 2026-04-17 01:06:42 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:06:42.645854 | orchestrator | 2026-04-17 01:06:42 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:06:42.646467 | orchestrator | 2026-04-17 01:06:42 | INFO  | Task 502e42fe-be47-471d-8ca1-5afd40a56146 is in state SUCCESS 
2026-04-17 01:06:42.647232 | orchestrator | 2026-04-17 01:06:42 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED
2026-04-17 01:06:42.647259 | orchestrator | 2026-04-17 01:06:42 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:06:45.686507 | orchestrator | 2026-04-17 01:06:45 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:06:45.687640 | orchestrator | 2026-04-17 01:06:45 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:06:45.689511 | orchestrator | 2026-04-17 01:06:45 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:06:45.690969 | orchestrator | 2026-04-17 01:06:45 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state STARTED
2026-04-17 01:06:45.691015 | orchestrator | 2026-04-17 01:06:45 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:01.806762 | orchestrator | 2026-04-17 01:08:01 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:01.808512 | orchestrator | 2026-04-17 01:08:01 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:01.809657 | orchestrator | 2026-04-17 01:08:01 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:01.812841 | orchestrator | 2026-04-17 01:08:01 | INFO  | Task 2f97f107-a975-4ffb-80cc-05d90dadc869 is in state SUCCESS
2026-04-17 01:08:01.814353 | orchestrator |
2026-04-17 01:08:01.814402 | orchestrator |
2026-04-17 01:08:01.814410 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 01:08:01.814416 | orchestrator |
2026-04-17 01:08:01.814421 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 01:08:01.814426 | orchestrator | Friday 17 April 2026 01:06:37 +0000 (0:00:00.181) 0:00:00.181 **********
2026-04-17 01:08:01.814431 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:08:01.814436 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:08:01.814441 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:08:01.814445 | orchestrator | 2026-04-17 01:08:01.814450 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:08:01.814455 | orchestrator | Friday 17 April 2026 01:06:38 +0000 (0:00:00.345) 0:00:00.526 ********** 2026-04-17 01:08:01.814460 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-17 01:08:01.814465 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-17 01:08:01.814470 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-17 01:08:01.814474 | orchestrator | 2026-04-17 01:08:01.814479 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-17 01:08:01.814483 | orchestrator | 2026-04-17 01:08:01.814488 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-17 01:08:01.814492 | orchestrator | Friday 17 April 2026 01:06:38 +0000 (0:00:00.525) 0:00:01.052 ********** 2026-04-17 01:08:01.814497 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:01.814501 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:08:01.814506 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:08:01.814511 | orchestrator | 2026-04-17 01:08:01.814516 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:08:01.814521 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:08:01.814527 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:08:01.814532 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:08:01.814536 | orchestrator | 2026-04-17 01:08:01.814559 | orchestrator | 2026-04-17 01:08:01.814564 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-17 01:08:01.814568 | orchestrator | Friday 17 April 2026 01:06:39 +0000 (0:00:01.215) 0:00:02.268 ********** 2026-04-17 01:08:01.814573 | orchestrator | =============================================================================== 2026-04-17 01:08:01.814578 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.22s 2026-04-17 01:08:01.814582 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-04-17 01:08:01.814587 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-04-17 01:08:01.814591 | orchestrator | 2026-04-17 01:08:01.814596 | orchestrator | 2026-04-17 01:08:01.814600 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:08:01.814605 | orchestrator | 2026-04-17 01:08:01.814609 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:08:01.814614 | orchestrator | Friday 17 April 2026 01:06:11 +0000 (0:00:00.320) 0:00:00.320 ********** 2026-04-17 01:08:01.814618 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:01.814695 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:08:01.814702 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:08:01.814707 | orchestrator | 2026-04-17 01:08:01.814711 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:08:01.814716 | orchestrator | Friday 17 April 2026 01:06:11 +0000 (0:00:00.268) 0:00:00.589 ********** 2026-04-17 01:08:01.814720 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-17 01:08:01.814725 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-17 01:08:01.814730 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-17 01:08:01.814734 | orchestrator | 2026-04-17 
01:08:01.814739 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-17 01:08:01.814743 | orchestrator | 2026-04-17 01:08:01.814748 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-17 01:08:01.814752 | orchestrator | Friday 17 April 2026 01:06:11 +0000 (0:00:00.304) 0:00:00.893 ********** 2026-04-17 01:08:01.814757 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:01.814761 | orchestrator | 2026-04-17 01:08:01.814766 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-17 01:08:01.814770 | orchestrator | Friday 17 April 2026 01:06:12 +0000 (0:00:00.613) 0:00:01.507 ********** 2026-04-17 01:08:01.814776 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-17 01:08:01.814780 | orchestrator | 2026-04-17 01:08:01.814785 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-17 01:08:01.814789 | orchestrator | Friday 17 April 2026 01:06:16 +0000 (0:00:04.293) 0:00:05.800 ********** 2026-04-17 01:08:01.814794 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-17 01:08:01.814799 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-17 01:08:01.814803 | orchestrator | 2026-04-17 01:08:01.814808 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-17 01:08:01.814812 | orchestrator | Friday 17 April 2026 01:06:24 +0000 (0:00:07.505) 0:00:13.306 ********** 2026-04-17 01:08:01.814817 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:08:01.814822 | orchestrator | 2026-04-17 01:08:01.814826 | orchestrator | TASK [service-ks-register : magnum | 
Creating users] *************************** 2026-04-17 01:08:01.814831 | orchestrator | Friday 17 April 2026 01:06:27 +0000 (0:00:02.852) 0:00:16.159 ********** 2026-04-17 01:08:01.814846 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-17 01:08:01.814852 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:08:01.814857 | orchestrator | 2026-04-17 01:08:01.814861 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-17 01:08:01.814866 | orchestrator | Friday 17 April 2026 01:06:31 +0000 (0:00:03.955) 0:00:20.114 ********** 2026-04-17 01:08:01.814871 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:08:01.814875 | orchestrator | 2026-04-17 01:08:01.814880 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-17 01:08:01.814885 | orchestrator | Friday 17 April 2026 01:06:34 +0000 (0:00:03.930) 0:00:24.044 ********** 2026-04-17 01:08:01.814889 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-17 01:08:01.814894 | orchestrator | 2026-04-17 01:08:01.814898 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-17 01:08:01.814903 | orchestrator | Friday 17 April 2026 01:06:39 +0000 (0:00:04.402) 0:00:28.447 ********** 2026-04-17 01:08:01.814907 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:01.814912 | orchestrator | 2026-04-17 01:08:01.814916 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-17 01:08:01.814921 | orchestrator | Friday 17 April 2026 01:06:42 +0000 (0:00:03.223) 0:00:31.670 ********** 2026-04-17 01:08:01.814925 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:01.814934 | orchestrator | 2026-04-17 01:08:01.814939 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
****************************** 2026-04-17 01:08:01.814945 | orchestrator | Friday 17 April 2026 01:06:45 +0000 (0:00:03.329) 0:00:35.000 ********** 2026-04-17 01:08:01.814950 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:01.814956 | orchestrator | 2026-04-17 01:08:01.814961 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-17 01:08:01.814967 | orchestrator | Friday 17 April 2026 01:06:49 +0000 (0:00:03.533) 0:00:38.533 ********** 2026-04-17 01:08:01.814975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.814984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.814990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.815002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.815024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.815031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.815037 | orchestrator | 2026-04-17 01:08:01.815042 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-17 01:08:01.815047 | orchestrator | Friday 17 April 2026 01:06:51 +0000 (0:00:01.872) 0:00:40.405 ********** 2026-04-17 01:08:01.815053 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:01.815059 
| orchestrator | 2026-04-17 01:08:01.815064 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-17 01:08:01.815069 | orchestrator | Friday 17 April 2026 01:06:51 +0000 (0:00:00.103) 0:00:40.509 ********** 2026-04-17 01:08:01.815075 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:01.815080 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:01.815085 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:01.815090 | orchestrator | 2026-04-17 01:08:01.815096 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-17 01:08:01.815101 | orchestrator | Friday 17 April 2026 01:06:51 +0000 (0:00:00.267) 0:00:40.776 ********** 2026-04-17 01:08:01.815107 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 01:08:01.815112 | orchestrator | 2026-04-17 01:08:01.815118 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-17 01:08:01.815123 | orchestrator | Friday 17 April 2026 01:06:52 +0000 (0:00:00.842) 0:00:41.619 ********** 2026-04-17 01:08:01.815128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.815139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.815150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.815156 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.815162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.815167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.815173 | orchestrator |
2026-04-17 01:08:01.815178 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-17 01:08:01.815183 | orchestrator | Friday 17 April 2026 01:06:55 +0000 (0:00:02.481) 0:00:44.101 **********
2026-04-17 01:08:01.816464 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:08:01.816481 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:08:01.816487 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:08:01.816492 | orchestrator |
2026-04-17 01:08:01.816498 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-17 01:08:01.816514 | orchestrator | Friday 17 April 2026 01:06:55 +0000 (0:00:00.425) 0:00:44.526 **********
2026-04-17 01:08:01.816519 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 01:08:01.816524 | orchestrator |
2026-04-17 01:08:01.816529 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-17 01:08:01.816534 | orchestrator | Friday 17 April 2026 01:06:55 +0000 (0:00:00.524) 0:00:45.051 **********
2026-04-17 01:08:01.816540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.816546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.816551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.816557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.816575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.816580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.816585 | orchestrator |
2026-04-17 01:08:01.816590 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-04-17 01:08:01.816594 | orchestrator | Friday 17 April 2026 01:06:58 +0000 (0:00:02.087) 0:00:47.138 **********
2026-04-17 01:08:01.816599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 01:08:01.816604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor',
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:01.816609 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:01.816614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 01:08:01.816628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:01.816633 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:01.816638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 01:08:01.816643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.816647 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:01.816652 | orchestrator |
2026-04-17 01:08:01.816657 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-04-17 01:08:01.816661 | orchestrator | Friday 17 April 2026 01:06:58 +0000 (0:00:00.768) 0:00:47.906 **********
2026-04-17 01:08:01.816666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 01:08:01.816686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:01.816691 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:01.816700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 01:08:01.816705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})
2026-04-17 01:08:01.816710 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:01.816714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 01:08:01.816719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.816727 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:01.816732 | orchestrator |
2026-04-17 01:08:01.816736 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-04-17 01:08:01.816741 | orchestrator | Friday 17 April 2026 01:06:59 +0000 (0:00:00.933) 0:00:48.651 **********
2026-04-17 01:08:01.816983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 01:08:01.816993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511',
'listen_port': '9511'}}}}) 2026-04-17 01:08:01.816998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.817013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.817022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.817027 | orchestrator |
2026-04-17 01:08:01.817032 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-04-17 01:08:01.817037 | orchestrator | Friday 17 April 2026 01:07:01 +0000 (0:00:02.293) 0:00:50.945 **********
2026-04-17 01:08:01.817042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.817070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.817075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:01.817080 | orchestrator |
2026-04-17 01:08:01.817084 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-04-17 01:08:01.817089 | orchestrator | Friday 17 April 2026 01:07:07 +0000 (0:00:05.513) 0:00:56.459 **********
2026-04-17 01:08:01.817094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-17 01:08:01.817113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value':
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:01.817118 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:01.817122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 01:08:01.817132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:01.817137 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:01.817142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-17 01:08:01.817147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:01.817163 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:01.817168 | orchestrator | 2026-04-17 01:08:01.817172 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-17 01:08:01.817177 | orchestrator | Friday 17 April 2026 01:07:08 +0000 (0:00:00.933) 0:00:57.392 ********** 2026-04-17 01:08:01.817182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-17 01:08:01.817200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.818517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.818537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:01.818543 | orchestrator | 2026-04-17 01:08:01.818549 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-17 01:08:01.818554 | orchestrator | Friday 17 April 2026 01:07:10 +0000 (0:00:02.343) 0:00:59.736 ********** 2026-04-17 01:08:01.818558 | orchestrator 
| skipping: [testbed-node-0] 2026-04-17 01:08:01.818564 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:01.818568 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:01.818572 | orchestrator | 2026-04-17 01:08:01.818576 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-17 01:08:01.818580 | orchestrator | Friday 17 April 2026 01:07:10 +0000 (0:00:00.231) 0:00:59.968 ********** 2026-04-17 01:08:01.818584 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:01.818588 | orchestrator | 2026-04-17 01:08:01.818592 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-17 01:08:01.818596 | orchestrator | Friday 17 April 2026 01:07:13 +0000 (0:00:02.530) 0:01:02.499 ********** 2026-04-17 01:08:01.818600 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:01.818604 | orchestrator | 2026-04-17 01:08:01.818608 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-17 01:08:01.818613 | orchestrator | Friday 17 April 2026 01:07:16 +0000 (0:00:02.922) 0:01:05.421 ********** 2026-04-17 01:08:01.818627 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:01.818632 | orchestrator | 2026-04-17 01:08:01.818636 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-17 01:08:01.818640 | orchestrator | Friday 17 April 2026 01:07:31 +0000 (0:00:15.561) 0:01:20.983 ********** 2026-04-17 01:08:01.818644 | orchestrator | 2026-04-17 01:08:01.818648 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-17 01:08:01.818652 | orchestrator | Friday 17 April 2026 01:07:32 +0000 (0:00:00.235) 0:01:21.218 ********** 2026-04-17 01:08:01.818656 | orchestrator | 2026-04-17 01:08:01.818660 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-17 
01:08:01.818664 | orchestrator | Friday 17 April 2026 01:07:32 +0000 (0:00:00.111) 0:01:21.329 **********
2026-04-17 01:08:01.818668 | orchestrator |
2026-04-17 01:08:01.818672 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-17 01:08:01.818677 | orchestrator | Friday 17 April 2026 01:07:32 +0000 (0:00:00.069) 0:01:21.399 **********
2026-04-17 01:08:01.818681 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:08:01.818700 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:08:01.818704 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:08:01.818708 | orchestrator |
2026-04-17 01:08:01.818712 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-17 01:08:01.818717 | orchestrator | Friday 17 April 2026 01:07:46 +0000 (0:00:13.861) 0:01:35.261 **********
2026-04-17 01:08:01.818721 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:08:01.818725 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:08:01.818729 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:08:01.818733 | orchestrator |
2026-04-17 01:08:01.818737 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:08:01.818742 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-17 01:08:01.818747 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 01:08:01.818751 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-17 01:08:01.818755 | orchestrator |
2026-04-17 01:08:01.818759 | orchestrator |
2026-04-17 01:08:01.818764 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:08:01.818768 | orchestrator | Friday 17 April 2026 01:08:00 +0000 (0:00:13.884) 0:01:49.145 **********
2026-04-17 01:08:01.818772 | orchestrator | ===============================================================================
2026-04-17 01:08:01.818776 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.56s
2026-04-17 01:08:01.818780 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.88s
2026-04-17 01:08:01.818784 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.86s
2026-04-17 01:08:01.818788 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.51s
2026-04-17 01:08:01.818792 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.51s
2026-04-17 01:08:01.818796 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.40s
2026-04-17 01:08:01.818800 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.29s
2026-04-17 01:08:01.818804 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.96s
2026-04-17 01:08:01.818808 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.93s
2026-04-17 01:08:01.818812 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.53s
2026-04-17 01:08:01.818816 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.33s
2026-04-17 01:08:01.818820 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.22s
2026-04-17 01:08:01.818824 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.92s
2026-04-17 01:08:01.818828 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.85s
2026-04-17 01:08:01.818832 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.53s
2026-04-17
01:08:01.818836 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.48s
2026-04-17 01:08:01.818840 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.34s
2026-04-17 01:08:01.818844 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.29s
2026-04-17 01:08:01.818848 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.09s
2026-04-17 01:08:01.818853 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.87s
2026-04-17 01:08:01.818857 | orchestrator | 2026-04-17 01:08:01 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:04.861741 | orchestrator | 2026-04-17 01:08:04 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:04.863964 | orchestrator | 2026-04-17 01:08:04 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:04.865427 | orchestrator | 2026-04-17 01:08:04 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:04.866157 | orchestrator | 2026-04-17 01:08:04 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:07.906957 | orchestrator | 2026-04-17 01:08:07 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:07.909471 | orchestrator | 2026-04-17 01:08:07 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:07.911593 | orchestrator | 2026-04-17 01:08:07 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:07.911641 | orchestrator | 2026-04-17 01:08:07 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:10.947541 | orchestrator | 2026-04-17 01:08:10 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:10.949660 | orchestrator | 2026-04-17 01:08:10 | INFO  | Task
c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:10.951553 | orchestrator | 2026-04-17 01:08:10 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:10.951628 | orchestrator | 2026-04-17 01:08:10 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:13.992858 | orchestrator | 2026-04-17 01:08:13 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:13.994965 | orchestrator | 2026-04-17 01:08:13 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:13.995637 | orchestrator | 2026-04-17 01:08:13 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:13.995726 | orchestrator | 2026-04-17 01:08:13 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:17.034402 | orchestrator | 2026-04-17 01:08:17 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:17.035874 | orchestrator | 2026-04-17 01:08:17 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:17.037758 | orchestrator | 2026-04-17 01:08:17 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:17.037797 | orchestrator | 2026-04-17 01:08:17 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:20.082683 | orchestrator | 2026-04-17 01:08:20 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:20.085059 | orchestrator | 2026-04-17 01:08:20 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:20.087178 | orchestrator | 2026-04-17 01:08:20 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:20.087259 | orchestrator | 2026-04-17 01:08:20 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:23.124905 | orchestrator | 2026-04-17 01:08:23 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:23.126078 | orchestrator | 2026-04-17 01:08:23 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:23.127116 | orchestrator | 2026-04-17 01:08:23 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:23.127165 | orchestrator | 2026-04-17 01:08:23 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:26.162518 | orchestrator | 2026-04-17 01:08:26 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state STARTED
2026-04-17 01:08:26.164126 | orchestrator | 2026-04-17 01:08:26 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED
2026-04-17 01:08:26.166168 | orchestrator | 2026-04-17 01:08:26 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED
2026-04-17 01:08:26.166250 | orchestrator | 2026-04-17 01:08:26 | INFO  | Wait 1 second(s) until the next check
2026-04-17 01:08:29.202785 | orchestrator | 2026-04-17 01:08:29 | INFO  | Task e650a3c2-4c8f-4816-8fe3-ed7abe1c0fd0 is in state SUCCESS
2026-04-17 01:08:29.204719 | orchestrator |
2026-04-17 01:08:29.204797 | orchestrator |
2026-04-17 01:08:29.204809 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 01:08:29.204818 | orchestrator |
2026-04-17 01:08:29.204826 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 01:08:29.204834 | orchestrator | Friday 17 April 2026 01:06:11 +0000 (0:00:00.307) 0:00:00.307 **********
2026-04-17 01:08:29.204841 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:08:29.204850 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:08:29.204857 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:08:29.204864 | orchestrator |
2026-04-17 01:08:29.204870 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 01:08:29.204875 | orchestrator | Friday 17 April 2026 01:06:12 +0000
(0:00:00.259) 0:00:00.566 ********** 2026-04-17 01:08:29.204881 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-17 01:08:29.204887 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-17 01:08:29.204893 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-17 01:08:29.204900 | orchestrator | 2026-04-17 01:08:29.204906 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-17 01:08:29.204912 | orchestrator | 2026-04-17 01:08:29.204920 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-17 01:08:29.204926 | orchestrator | Friday 17 April 2026 01:06:12 +0000 (0:00:00.272) 0:00:00.839 ********** 2026-04-17 01:08:29.204932 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:29.204940 | orchestrator | 2026-04-17 01:08:29.204947 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-17 01:08:29.204954 | orchestrator | Friday 17 April 2026 01:06:12 +0000 (0:00:00.582) 0:00:01.422 ********** 2026-04-17 01:08:29.204965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 01:08:29.204976 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 01:08:29.204984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 01:08:29.205369 | orchestrator | 2026-04-17 01:08:29.205391 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-17 01:08:29.205400 | orchestrator | Friday 17 April 2026 01:06:13 +0000 (0:00:01.045) 0:00:02.467 ********** 2026-04-17 01:08:29.205407 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-17 01:08:29.205415 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-17 01:08:29.205423 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 01:08:29.205430 | orchestrator | 2026-04-17 01:08:29.205438 | orchestrator | TASK 
[grafana : include_tasks] ************************************************* 2026-04-17 01:08:29.205446 | orchestrator | Friday 17 April 2026 01:06:14 +0000 (0:00:00.884) 0:00:03.352 ********** 2026-04-17 01:08:29.205453 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:29.205461 | orchestrator | 2026-04-17 01:08:29.205467 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-17 01:08:29.205474 | orchestrator | Friday 17 April 2026 01:06:15 +0000 (0:00:00.570) 0:00:03.922 ********** 2026-04-17 01:08:29.205495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 01:08:29.205504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}}) 2026-04-17 01:08:29.205512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-17 01:08:29.205518 | orchestrator | 2026-04-17 01:08:29.205525 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-17 01:08:29.205532 | orchestrator | Friday 17 April 2026 01:06:17 +0000 (0:00:01.717) 0:00:05.640 ********** 2026-04-17 01:08:29.205540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 01:08:29.205557 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:29.205564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 01:08:29.205572 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:29.205586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 01:08:29.205594 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:29.205601 | orchestrator | 2026-04-17 01:08:29.205609 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-17 01:08:29.205616 | orchestrator | Friday 17 April 2026 01:06:17 +0000 (0:00:00.384) 0:00:06.025 ********** 2026-04-17 01:08:29.205624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 01:08:29.205632 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:29.205640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 01:08:29.205686 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:29.205695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-17 01:08:29.205711 | orchestrator | skipping: [testbed-node-2] 
2026-04-17 01:08:29.205719 | orchestrator |
2026-04-17 01:08:29.205726 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-17 01:08:29.206096 | orchestrator | Friday 17 April 2026 01:06:18 +0000 (0:00:00.577) 0:00:06.602 **********
2026-04-17 01:08:29.206116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.206125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.206164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.206173 | orchestrator |
2026-04-17 01:08:29.206180 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-17 01:08:29.206188 | orchestrator | Friday 17 April 2026 01:06:19 +0000 (0:00:01.496) 0:00:08.098 **********
2026-04-17 01:08:29.206194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.206201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.206247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.206255 | orchestrator |
2026-04-17 01:08:29.206262 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-17 01:08:29.206269 | orchestrator | Friday 17 April 2026 01:06:21 +0000 (0:00:01.491) 0:00:09.590 **********
2026-04-17 01:08:29.206276 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:29.206283 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:29.206290 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:29.206297 | orchestrator |
2026-04-17 01:08:29.206304 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-17 01:08:29.206311 | orchestrator | Friday 17 April 2026 01:06:21 +0000 (0:00:00.249) 0:00:09.839 **********
2026-04-17 01:08:29.206318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-17 01:08:29.206327 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-17 01:08:29.206335 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-17 01:08:29.206342 | orchestrator |
2026-04-17 01:08:29.206349 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-17 01:08:29.206355 | orchestrator | Friday 17 April 2026 01:06:22 +0000 (0:00:01.166) 0:00:11.006 **********
2026-04-17 01:08:29.206363 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-17 01:08:29.206370 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-17 01:08:29.206377 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-17 01:08:29.206383 | orchestrator |
2026-04-17 01:08:29.206389 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-17 01:08:29.206396 | orchestrator | Friday 17 April 2026 01:06:23 +0000 (0:00:01.223) 0:00:12.229 **********
2026-04-17 01:08:29.206431 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-17 01:08:29.206439 | orchestrator |
2026-04-17 01:08:29.206446 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-17 01:08:29.206452 | orchestrator | Friday 17 April 2026 01:06:24 +0000 (0:00:00.901) 0:00:13.131 **********
2026-04-17 01:08:29.206459 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-17 01:08:29.206466 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-17 01:08:29.206496 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:08:29.206504 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:08:29.206510 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:08:29.206516 | orchestrator |
2026-04-17 01:08:29.206522 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-17 01:08:29.206573 | orchestrator | Friday 17 April 2026 01:06:25 +0000 (0:00:00.735) 0:00:13.867 **********
2026-04-17 01:08:29.206582 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:29.206588 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:29.206594 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:29.206601 | orchestrator |
2026-04-17 01:08:29.206608 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-17 01:08:29.206614 | orchestrator | Friday 17 April 2026 01:06:25 +0000 (0:00:00.315) 0:00:14.182 **********
2026-04-17 01:08:29.206622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1085300, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2659028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1085300, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2659028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1085300, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2659028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1085329, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2747633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1085329, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2747633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1085329, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2747633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1085400, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2879033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1085400, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2879033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1085400, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2879033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1085324, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.271903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1085324, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.271903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1085324, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.271903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1085402, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2899032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1085402, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2899032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1085402, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2899032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1085305, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.268903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1085305, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.268903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1085305, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.268903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1085354, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2791367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1085354, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2791367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1085354, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2791367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1085392, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2859032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1085392, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2859032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1085392, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2859032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1085299, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2637546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1085299, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2637546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1085299, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2637546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1085302, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.266903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1085302, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.266903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.206996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1085302, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.266903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1085327, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.272903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1085327, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.272903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1085327, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.272903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1085382, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2839031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1085382, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2839031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1085382, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2839031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1085398, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2879033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1085398, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2879033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1085398, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2879033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1085318, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.271903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1085318, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.271903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1085318, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.271903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1085389, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2855577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1085389, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2855577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1085389, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2855577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1085406, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2899032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock':
False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1085406, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2899032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1085406, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2899032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1085357, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2829032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1085357, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2829032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1085357, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2829032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1085344, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2784321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1085344, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2784321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1085344, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2784321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1085340, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.275903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1085340, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.275903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1085340, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.275903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1085385, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2849033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1085385, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2849033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1085385, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2849033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1085339, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.274903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1085339, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.274903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1085339, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.274903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1085395, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2869031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207517 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1085395, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2869031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1085395, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2869031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1085310, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.269903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207546 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1085310, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.269903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1085310, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.269903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1085609, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3329039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 
01:08:29.207575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1085609, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3329039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1085609, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3329039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1085462, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.308324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-17 01:08:29.207603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1085462, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.308324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1085462, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.308324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1085421, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2935975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1085421, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2935975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1085421, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2935975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1085485, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3109035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1085485, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3109035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1085485, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3109035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1085408, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 
1776385094.2909033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1085408, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2909033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1085408, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.2909033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
65458, 'inode': 1085521, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3202763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1085521, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3202763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1085521, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3202763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-17 01:08:29.207739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1085488, 'dev': 79, 'nlink': 1, 'atime': 1776384148.0, 'mtime': 1776384148.0, 'ctime': 1776385094.3165262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-17 01:08:29.207747 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json, mode=0644, size=682774)
2026-04-17 01:08:29.207755 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json, mode=0644, size=682774)
2026-04-17 01:08:29.207763 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json, mode=0644, size=22303)
2026-04-17 01:08:29.207776 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json, mode=0644, size=22303)
2026-04-17 01:08:29.207784 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json, mode=0644, size=22303)
2026-04-17 01:08:29.207797 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json, mode=0644, size=38087)
2026-04-17 01:08:29.207806 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json, mode=0644, size=38087)
2026-04-17 01:08:29.207814 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json, mode=0644, size=38087)
2026-04-17 01:08:29.207822 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json, mode=0644, size=21194)
2026-04-17 01:08:29.207862 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json, mode=0644, size=21194)
2026-04-17 01:08:29.207871 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json, mode=0644, size=21194)
2026-04-17 01:08:29.207881 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json, mode=0644, size=24243)
2026-04-17 01:08:29.207888 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json, mode=0644, size=24243)
2026-04-17 01:08:29.207910 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json, mode=0644, size=24243)
2026-04-17 01:08:29.207918 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json, mode=0644, size=82960)
2026-04-17 01:08:29.207932 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json, mode=0644, size=82960)
2026-04-17 01:08:29.207940 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json, mode=0644, size=82960)
2026-04-17 01:08:29.207948 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json, mode=0644, size=29672)
2026-04-17 01:08:29.207961 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json, mode=0644, size=29672)
2026-04-17 01:08:29.207973 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json, mode=0644, size=29672)
2026-04-17 01:08:29.207982 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json, mode=0644, size=187864)
2026-04-17 01:08:29.208009 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json, mode=0644, size=187864)
2026-04-17 01:08:29.208017 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json, mode=0644, size=187864)
2026-04-17 01:08:29.208026 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, size=15957)
2026-04-17 01:08:29.208039 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, size=15957)
2026-04-17 01:08:29.208051 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json, mode=0644, size=15957)
2026-04-17 01:08:29.208059 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/rabbitmq.json, mode=0644, size=222049)
2026-04-17 01:08:29.208077 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/rabbitmq.json, mode=0644, size=222049)
2026-04-17 01:08:29.208086 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/rabbitmq.json, mode=0644, size=222049)
2026-04-17 01:08:29.208094 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus_alertmanager.json, mode=0644, size=115472)
2026-04-17 01:08:29.208107 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus_alertmanager.json, mode=0644, size=115472)
2026-04-17 01:08:29.208119 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json, mode=0644, size=115472)
2026-04-17 01:08:29.208131 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/blackbox.json, mode=0644, size=31128)
2026-04-17 01:08:29.208138 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/blackbox.json, mode=0644, size=31128)
2026-04-17 01:08:29.208144 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/blackbox.json, mode=0644, size=31128)
2026-04-17 01:08:29.208151 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/cadvisor.json, mode=0644, size=53882)
2026-04-17 01:08:29.208162 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/cadvisor.json, mode=0644, size=53882)
2026-04-17 01:08:29.208169 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/cadvisor.json, mode=0644, size=53882)
2026-04-17 01:08:29.208197 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json, mode=0644, size=70691)
2026-04-17 01:08:29.208207 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_side_by_side.json, mode=0644, size=70691)
2026-04-17 01:08:29.208234 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_side_by_side.json, mode=0644, size=70691)
2026-04-17 01:08:29.208241 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus.json, mode=0644, size=21951)
2026-04-17 01:08:29.208251 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus.json, mode=0644, size=21951)
2026-04-17 01:08:29.208258 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus.json, mode=0644, size=21951)
2026-04-17 01:08:29.208270 | orchestrator | 
2026-04-17 01:08:29.208279 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-04-17 01:08:29.208290 | orchestrator | Friday 17 April 2026 01:07:05 +0000 (0:00:39.833) 0:00:54.016 **********
2026-04-17 01:08:29.208300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-17 01:08:29.208310 | orchestrator | changed: [testbed-node-0] => (item=grafana, image=registry.osism.tech/kolla/grafana:2024.2)
2026-04-17 01:08:29.208319 | orchestrator | changed: [testbed-node-2] => (item=grafana, image=registry.osism.tech/kolla/grafana:2024.2)
2026-04-17 01:08:29.208328 | orchestrator | 
2026-04-17 01:08:29.208338 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-17 01:08:29.208346 | orchestrator | Friday 17 April 2026 01:07:06 +0000 (0:00:01.497) 0:00:55.514 **********
2026-04-17 01:08:29.208356 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:08:29.208365 | orchestrator | 
2026-04-17 01:08:29.208374 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-04-17 01:08:29.208383 | orchestrator | 
Friday 17 April 2026 01:07:09 +0000 (0:00:02.683) 0:00:58.198 **********
2026-04-17 01:08:29.208391 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:08:29.208400 | orchestrator | 
2026-04-17 01:08:29.208409 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 01:08:29.208417 | orchestrator | Friday 17 April 2026 01:07:12 +0000 (0:00:02.685) 0:01:00.883 **********
2026-04-17 01:08:29.208425 | orchestrator | 
2026-04-17 01:08:29.208434 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 01:08:29.208442 | orchestrator | Friday 17 April 2026 01:07:12 +0000 (0:00:00.073) 0:01:00.957 **********
2026-04-17 01:08:29.208451 | orchestrator | 
2026-04-17 01:08:29.208460 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-17 01:08:29.208469 | orchestrator | Friday 17 April 2026 01:07:12 +0000 (0:00:00.067) 0:01:01.025 **********
2026-04-17 01:08:29.208477 | orchestrator | 
2026-04-17 01:08:29.208486 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-17 01:08:29.208495 | orchestrator | Friday 17 April 2026 01:07:12 +0000 (0:00:00.072) 0:01:01.097 **********
2026-04-17 01:08:29.208503 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:29.208518 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:29.208526 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:08:29.208535 | orchestrator | 
2026-04-17 01:08:29.208544 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-17 01:08:29.208557 | orchestrator | Friday 17 April 2026 01:07:14 +0000 (0:00:02.221) 0:01:03.319 **********
2026-04-17 01:08:29.208567 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:29.208575 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:29.208583 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-17 01:08:29.208594 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-17 01:08:29.208603 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:08:29.208612 | orchestrator | 
2026-04-17 01:08:29.208620 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-17 01:08:29.208629 | orchestrator | Friday 17 April 2026 01:07:42 +0000 (0:00:27.313) 0:01:30.632 **********
2026-04-17 01:08:29.208638 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:29.208646 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:08:29.208655 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:08:29.208663 | orchestrator | 
2026-04-17 01:08:29.208671 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-04-17 01:08:29.210125 | orchestrator | Friday 17 April 2026 01:08:22 +0000 (0:00:40.235) 0:02:10.868 **********
2026-04-17 01:08:29.210180 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:08:29.210188 | orchestrator | 
2026-04-17 01:08:29.210195 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-04-17 01:08:29.210201 | orchestrator | Friday 17 April 2026 01:08:24 +0000 (0:00:02.515) 0:02:13.383 **********
2026-04-17 01:08:29.210208 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:29.210270 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:29.210277 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:29.210283 | orchestrator | 
2026-04-17 01:08:29.210290 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-04-17 01:08:29.210296 | orchestrator | Friday 17 April 2026 01:08:25 +0000 (0:00:00.245) 0:02:13.628 **********
2026-04-17 01:08:29.210305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-04-17 01:08:29.210314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-04-17 01:08:29.210321 | orchestrator | 
2026-04-17 01:08:29.210327 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-04-17 01:08:29.210332 | orchestrator | Friday 17 April 2026 01:08:28 +0000 (0:00:03.049) 0:02:16.677 **********
2026-04-17 01:08:29.210338 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:29.210344 | orchestrator | 
2026-04-17 01:08:29.210350 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:08:29.210356 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 01:08:29.210364 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 01:08:29.210369 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-17 01:08:29.210388 | orchestrator | 
2026-04-17 01:08:29.210394 | orchestrator | 
2026-04-17 01:08:29.210400 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:08:29.210405 | orchestrator | Friday 17 April 2026 01:08:28 +0000 (0:00:00.252) 0:02:16.930 **********
2026-04-17 01:08:29.210410 | orchestrator | ===============================================================================
2026-04-17 01:08:29.210416 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 40.24s
2026-04-17 01:08:29.210422 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.83s
2026-04-17 01:08:29.210427 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.31s
2026-04-17 01:08:29.210432 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 3.05s
2026-04-17 01:08:29.210438 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.68s
2026-04-17 01:08:29.210443 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.68s
2026-04-17 01:08:29.210449 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.52s
2026-04-17 01:08:29.210455 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.22s
2026-04-17 01:08:29.210460 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.72s
2026-04-17 01:08:29.210466 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.50s
2026-04-17 01:08:29.210472 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.50s
2026-04-17 01:08:29.210477 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.49s
2026-04-17 01:08:29.210482 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.22s
2026-04-17 01:08:29.210505 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.17s
2026-04-17 01:08:29.210511 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.05s
2026-04-17 01:08:29.210517 | orchestrator | grafana : Find
custom grafana dashboards -------------------------------- 0.90s 2026-04-17 01:08:29.210531 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s 2026-04-17 01:08:29.210536 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s 2026-04-17 01:08:29.210542 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.58s 2026-04-17 01:08:29.210548 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.58s 2026-04-17 01:08:29.210555 | orchestrator | 2026-04-17 01:08:29 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:08:29.210567 | orchestrator | 2026-04-17 01:08:29 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:08:29.210583 | orchestrator | 2026-04-17 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:08:32.255805 | orchestrator | 2026-04-17 01:08:32 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:08:32.258332 | orchestrator | 2026-04-17 01:08:32 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:08:32.258406 | orchestrator | 2026-04-17 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:08:35.309465 | orchestrator | 2026-04-17 01:08:35 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state STARTED 2026-04-17 01:08:35.311371 | orchestrator | 2026-04-17 01:08:35 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:08:35.311437 | orchestrator | 2026-04-17 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:08:38.362999 | orchestrator | 2026-04-17 01:08:38.363110 | orchestrator | 2026-04-17 01:08:38.363128 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:08:38.363354 | orchestrator | 2026-04-17 01:08:38.363385 | 
orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-17 01:08:38.363425 | orchestrator | Friday 17 April 2026 00:59:55 +0000 (0:00:00.261) 0:00:00.261 ********** 2026-04-17 01:08:38.363434 | orchestrator | changed: [testbed-manager] 2026-04-17 01:08:38.363443 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.363450 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.363457 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.363464 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.363472 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.363479 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.363486 | orchestrator | 2026-04-17 01:08:38.363494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:08:38.363501 | orchestrator | Friday 17 April 2026 00:59:56 +0000 (0:00:00.724) 0:00:00.986 ********** 2026-04-17 01:08:38.363508 | orchestrator | changed: [testbed-manager] 2026-04-17 01:08:38.363516 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.363992 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.364003 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.364011 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.364018 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.364025 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.364032 | orchestrator | 2026-04-17 01:08:38.364040 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:08:38.364047 | orchestrator | Friday 17 April 2026 00:59:57 +0000 (0:00:00.676) 0:00:01.662 ********** 2026-04-17 01:08:38.364130 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-17 01:08:38.364345 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-17 01:08:38.364359 | 
orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-17 01:08:38.364366 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-17 01:08:38.364373 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-17 01:08:38.364381 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-17 01:08:38.364388 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-17 01:08:38.364395 | orchestrator | 2026-04-17 01:08:38.364402 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-17 01:08:38.364704 | orchestrator | 2026-04-17 01:08:38.364719 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-17 01:08:38.364727 | orchestrator | Friday 17 April 2026 00:59:57 +0000 (0:00:00.576) 0:00:02.239 ********** 2026-04-17 01:08:38.364735 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.364743 | orchestrator | 2026-04-17 01:08:38.364750 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-17 01:08:38.364757 | orchestrator | Friday 17 April 2026 00:59:58 +0000 (0:00:00.580) 0:00:02.819 ********** 2026-04-17 01:08:38.364765 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-17 01:08:38.364773 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-17 01:08:38.364781 | orchestrator | 2026-04-17 01:08:38.364788 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-17 01:08:38.364795 | orchestrator | Friday 17 April 2026 01:00:03 +0000 (0:00:05.051) 0:00:07.871 ********** 2026-04-17 01:08:38.364803 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 01:08:38.364814 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-17 01:08:38.364829 | orchestrator | changed: 
[testbed-node-0] 2026-04-17 01:08:38.364844 | orchestrator | 2026-04-17 01:08:38.364857 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-17 01:08:38.364868 | orchestrator | Friday 17 April 2026 01:00:07 +0000 (0:00:04.592) 0:00:12.464 ********** 2026-04-17 01:08:38.364879 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.364890 | orchestrator | 2026-04-17 01:08:38.364902 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-17 01:08:38.364934 | orchestrator | Friday 17 April 2026 01:00:08 +0000 (0:00:00.622) 0:00:13.087 ********** 2026-04-17 01:08:38.364947 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.364958 | orchestrator | 2026-04-17 01:08:38.364968 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-17 01:08:38.364979 | orchestrator | Friday 17 April 2026 01:00:10 +0000 (0:00:01.485) 0:00:14.573 ********** 2026-04-17 01:08:38.364991 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.365001 | orchestrator | 2026-04-17 01:08:38.365013 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 01:08:38.365024 | orchestrator | Friday 17 April 2026 01:00:12 +0000 (0:00:02.779) 0:00:17.352 ********** 2026-04-17 01:08:38.365037 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.365048 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.365059 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.365070 | orchestrator | 2026-04-17 01:08:38.365641 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-17 01:08:38.365683 | orchestrator | Friday 17 April 2026 01:00:13 +0000 (0:00:00.546) 0:00:17.899 ********** 2026-04-17 01:08:38.365697 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.365710 | orchestrator | 2026-04-17 
01:08:38.365723 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-17 01:08:38.365735 | orchestrator | Friday 17 April 2026 01:00:47 +0000 (0:00:34.613) 0:00:52.513 ********** 2026-04-17 01:08:38.365748 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.365761 | orchestrator | 2026-04-17 01:08:38.365773 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-17 01:08:38.365785 | orchestrator | Friday 17 April 2026 01:01:04 +0000 (0:00:16.885) 0:01:09.399 ********** 2026-04-17 01:08:38.365797 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.365810 | orchestrator | 2026-04-17 01:08:38.365822 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-17 01:08:38.365835 | orchestrator | Friday 17 April 2026 01:01:16 +0000 (0:00:11.730) 0:01:21.129 ********** 2026-04-17 01:08:38.365966 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.365983 | orchestrator | 2026-04-17 01:08:38.365996 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-17 01:08:38.366009 | orchestrator | Friday 17 April 2026 01:01:17 +0000 (0:00:01.159) 0:01:22.288 ********** 2026-04-17 01:08:38.366077 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.366091 | orchestrator | 2026-04-17 01:08:38.366118 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 01:08:38.366141 | orchestrator | Friday 17 April 2026 01:01:18 +0000 (0:00:00.686) 0:01:22.974 ********** 2026-04-17 01:08:38.366155 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.366168 | orchestrator | 2026-04-17 01:08:38.366180 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-17 01:08:38.366192 | 
orchestrator | Friday 17 April 2026 01:01:19 +0000 (0:00:00.600) 0:01:23.574 ********** 2026-04-17 01:08:38.366204 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.366250 | orchestrator | 2026-04-17 01:08:38.366261 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-17 01:08:38.366273 | orchestrator | Friday 17 April 2026 01:01:39 +0000 (0:00:20.598) 0:01:44.173 ********** 2026-04-17 01:08:38.366283 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.366294 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366306 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.366317 | orchestrator | 2026-04-17 01:08:38.366330 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-17 01:08:38.366343 | orchestrator | 2026-04-17 01:08:38.366355 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-17 01:08:38.366367 | orchestrator | Friday 17 April 2026 01:01:39 +0000 (0:00:00.321) 0:01:44.495 ********** 2026-04-17 01:08:38.366379 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.366410 | orchestrator | 2026-04-17 01:08:38.366422 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-17 01:08:38.366434 | orchestrator | Friday 17 April 2026 01:01:40 +0000 (0:00:00.843) 0:01:45.339 ********** 2026-04-17 01:08:38.366445 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366458 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.366469 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.366482 | orchestrator | 2026-04-17 01:08:38.366493 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-17 01:08:38.366506 | orchestrator | Friday 17 April 2026 01:01:43 +0000 (0:00:02.567) 0:01:47.906 
********** 2026-04-17 01:08:38.366518 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366530 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.366542 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.366554 | orchestrator | 2026-04-17 01:08:38.366566 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-17 01:08:38.366579 | orchestrator | Friday 17 April 2026 01:01:45 +0000 (0:00:02.476) 0:01:50.382 ********** 2026-04-17 01:08:38.366592 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.366604 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366617 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.366626 | orchestrator | 2026-04-17 01:08:38.366635 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-17 01:08:38.366643 | orchestrator | Friday 17 April 2026 01:01:46 +0000 (0:00:00.639) 0:01:51.022 ********** 2026-04-17 01:08:38.366652 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-17 01:08:38.366661 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366669 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-17 01:08:38.366678 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.366690 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-17 01:08:38.366702 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-17 01:08:38.366714 | orchestrator | 2026-04-17 01:08:38.366726 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-17 01:08:38.366738 | orchestrator | Friday 17 April 2026 01:01:55 +0000 (0:00:09.150) 0:02:00.173 ********** 2026-04-17 01:08:38.366750 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.366762 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366774 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 01:08:38.366784 | orchestrator | 2026-04-17 01:08:38.366796 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-17 01:08:38.366808 | orchestrator | Friday 17 April 2026 01:01:55 +0000 (0:00:00.284) 0:02:00.458 ********** 2026-04-17 01:08:38.366820 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-17 01:08:38.366831 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.366844 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-17 01:08:38.366856 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.366867 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-17 01:08:38.366880 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.366893 | orchestrator | 2026-04-17 01:08:38.366905 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-17 01:08:38.368356 | orchestrator | Friday 17 April 2026 01:01:56 +0000 (0:00:00.760) 0:02:01.218 ********** 2026-04-17 01:08:38.368403 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368411 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368418 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.368424 | orchestrator | 2026-04-17 01:08:38.368431 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-17 01:08:38.368438 | orchestrator | Friday 17 April 2026 01:01:57 +0000 (0:00:00.618) 0:02:01.837 ********** 2026-04-17 01:08:38.368445 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368452 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368471 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.368478 | orchestrator | 2026-04-17 01:08:38.368485 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-17 01:08:38.368491 | orchestrator | Friday 17 
April 2026 01:01:58 +0000 (0:00:00.996) 0:02:02.834 ********** 2026-04-17 01:08:38.368498 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368505 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368647 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.368658 | orchestrator | 2026-04-17 01:08:38.368665 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-17 01:08:38.368672 | orchestrator | Friday 17 April 2026 01:02:00 +0000 (0:00:01.984) 0:02:04.818 ********** 2026-04-17 01:08:38.368678 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368685 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368691 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.368699 | orchestrator | 2026-04-17 01:08:38.368706 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-17 01:08:38.368712 | orchestrator | Friday 17 April 2026 01:02:22 +0000 (0:00:21.872) 0:02:26.690 ********** 2026-04-17 01:08:38.368719 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368726 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368732 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.368739 | orchestrator | 2026-04-17 01:08:38.368746 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-17 01:08:38.368752 | orchestrator | Friday 17 April 2026 01:02:36 +0000 (0:00:14.047) 0:02:40.738 ********** 2026-04-17 01:08:38.368759 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.368766 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368772 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368779 | orchestrator | 2026-04-17 01:08:38.368785 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-17 01:08:38.368792 | orchestrator | Friday 17 April 2026 01:02:37 
+0000 (0:00:01.317) 0:02:42.055 ********** 2026-04-17 01:08:38.368799 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368809 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368821 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.368835 | orchestrator | 2026-04-17 01:08:38.368852 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-17 01:08:38.368863 | orchestrator | Friday 17 April 2026 01:02:52 +0000 (0:00:14.779) 0:02:56.835 ********** 2026-04-17 01:08:38.368875 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368886 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.368897 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368907 | orchestrator | 2026-04-17 01:08:38.368917 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-17 01:08:38.368927 | orchestrator | Friday 17 April 2026 01:02:54 +0000 (0:00:02.428) 0:02:59.263 ********** 2026-04-17 01:08:38.368939 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.368948 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.368958 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.368968 | orchestrator | 2026-04-17 01:08:38.368978 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-17 01:08:38.368989 | orchestrator | 2026-04-17 01:08:38.368999 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 01:08:38.369010 | orchestrator | Friday 17 April 2026 01:02:54 +0000 (0:00:00.271) 0:02:59.535 ********** 2026-04-17 01:08:38.369020 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.369033 | orchestrator | 2026-04-17 01:08:38.369045 | orchestrator | TASK [service-ks-register : nova | Creating services] 
************************** 2026-04-17 01:08:38.369055 | orchestrator | Friday 17 April 2026 01:02:55 +0000 (0:00:00.936) 0:03:00.472 ********** 2026-04-17 01:08:38.369066 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-17 01:08:38.369091 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-17 01:08:38.369103 | orchestrator | 2026-04-17 01:08:38.369114 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-17 01:08:38.369164 | orchestrator | Friday 17 April 2026 01:02:59 +0000 (0:00:04.052) 0:03:04.525 ********** 2026-04-17 01:08:38.369175 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-17 01:08:38.369188 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-17 01:08:38.369199 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-17 01:08:38.369210 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-17 01:08:38.369288 | orchestrator | 2026-04-17 01:08:38.369298 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-17 01:08:38.369309 | orchestrator | Friday 17 April 2026 01:03:06 +0000 (0:00:06.439) 0:03:10.964 ********** 2026-04-17 01:08:38.369320 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:08:38.369331 | orchestrator | 2026-04-17 01:08:38.369343 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-17 01:08:38.369354 | orchestrator | Friday 17 April 2026 01:03:10 +0000 (0:00:03.718) 0:03:14.682 ********** 2026-04-17 01:08:38.369376 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 
2026-04-17 01:08:38.369388 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:08:38.369398 | orchestrator | 2026-04-17 01:08:38.369409 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-17 01:08:38.369421 | orchestrator | Friday 17 April 2026 01:03:14 +0000 (0:00:04.327) 0:03:19.010 ********** 2026-04-17 01:08:38.369432 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:08:38.369443 | orchestrator | 2026-04-17 01:08:38.369454 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-17 01:08:38.369465 | orchestrator | Friday 17 April 2026 01:03:18 +0000 (0:00:03.576) 0:03:22.587 ********** 2026-04-17 01:08:38.369476 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-17 01:08:38.369487 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-17 01:08:38.369497 | orchestrator | 2026-04-17 01:08:38.369508 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-17 01:08:38.369654 | orchestrator | Friday 17 April 2026 01:03:26 +0000 (0:00:08.045) 0:03:30.632 ********** 2026-04-17 01:08:38.369682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.369699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.369726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.369862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.369884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.369925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.369954 | orchestrator | 2026-04-17 01:08:38.369966 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-17 01:08:38.369978 | orchestrator | Friday 17 April 2026 01:03:28 +0000 (0:00:02.103) 0:03:32.736 ********** 2026-04-17 01:08:38.369988 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.370000 | orchestrator | 2026-04-17 01:08:38.370050 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-17 01:08:38.370061 | orchestrator | Friday 17 April 2026 01:03:28 +0000 (0:00:00.103) 0:03:32.839 ********** 2026-04-17 01:08:38.370068 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.370074 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.370081 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.370088 | orchestrator | 2026-04-17 01:08:38.370095 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-17 01:08:38.370101 | orchestrator | Friday 17 April 2026 01:03:28 +0000 
(0:00:00.234) 0:03:33.074 ********** 2026-04-17 01:08:38.370108 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-17 01:08:38.370141 | orchestrator | 2026-04-17 01:08:38.370148 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-17 01:08:38.370154 | orchestrator | Friday 17 April 2026 01:03:29 +0000 (0:00:00.954) 0:03:34.028 ********** 2026-04-17 01:08:38.370161 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.370168 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.370174 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.370181 | orchestrator | 2026-04-17 01:08:38.370188 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-17 01:08:38.370194 | orchestrator | Friday 17 April 2026 01:03:29 +0000 (0:00:00.497) 0:03:34.526 ********** 2026-04-17 01:08:38.370202 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.370209 | orchestrator | 2026-04-17 01:08:38.370240 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 01:08:38.370251 | orchestrator | Friday 17 April 2026 01:03:30 +0000 (0:00:00.716) 0:03:35.243 ********** 2026-04-17 01:08:38.370270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.370322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.370359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.370368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.370393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.370442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.370458 | orchestrator | 2026-04-17 01:08:38.370469 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 01:08:38.370479 | orchestrator | Friday 17 April 2026 01:03:33 +0000 (0:00:02.764) 0:03:38.007 ********** 2026-04-17 01:08:38.370501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.370514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.370525 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.370536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.372097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.372159 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.372274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.372308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.372319 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.372330 | orchestrator | 2026-04-17 01:08:38.372340 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 01:08:38.372351 | orchestrator | Friday 17 April 2026 01:03:34 +0000 (0:00:00.818) 0:03:38.825 ********** 2026-04-17 01:08:38.372361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.372381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.372391 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.372440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.372464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.372475 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.372487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.372500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.372512 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.372524 | orchestrator | 2026-04-17 01:08:38.372536 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-17 01:08:38.372565 | orchestrator | Friday 17 April 2026 01:03:35 +0000 (0:00:00.959) 0:03:39.785 ********** 2026-04-17 01:08:38.372609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 
01:08:38.374355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374373 | orchestrator | 2026-04-17 01:08:38.374380 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-17 01:08:38.374388 | orchestrator | Friday 17 April 2026 01:03:38 +0000 (0:00:03.108) 0:03:42.894 ********** 2026-04-17 01:08:38.374395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374462 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374475 | orchestrator | 2026-04-17 01:08:38.374482 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-17 01:08:38.374488 | orchestrator | Friday 17 April 2026 01:03:46 +0000 (0:00:08.203) 0:03:51.097 ********** 2026-04-17 01:08:38.374498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.374545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.374553 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.374560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.374567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.374573 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.374580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-17 01:08:38.374613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.374626 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.374637 | orchestrator | 2026-04-17 01:08:38.374647 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-17 01:08:38.374658 | orchestrator | Friday 17 April 2026 01:03:47 +0000 (0:00:01.106) 0:03:52.203 ********** 2026-04-17 01:08:38.374667 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.374678 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.374689 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.374701 | orchestrator | 2026-04-17 01:08:38.374734 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-17 
01:08:38.374741 | orchestrator | Friday 17 April 2026 01:03:50 +0000 (0:00:02.933) 0:03:55.137 ********** 2026-04-17 01:08:38.374746 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.374751 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.374757 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.374762 | orchestrator | 2026-04-17 01:08:38.374767 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-17 01:08:38.374773 | orchestrator | Friday 17 April 2026 01:03:50 +0000 (0:00:00.396) 0:03:55.534 ********** 2026-04-17 01:08:38.374779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-17 01:08:38.374837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.374855 | orchestrator | 2026-04-17 01:08:38.374860 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 01:08:38.374866 | orchestrator | Friday 17 April 2026 01:03:52 +0000 (0:00:01.982) 0:03:57.516 ********** 2026-04-17 01:08:38.374871 | orchestrator | 2026-04-17 01:08:38.374876 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 01:08:38.374882 | orchestrator | Friday 17 April 2026 01:03:53 +0000 (0:00:00.361) 0:03:57.878 ********** 2026-04-17 01:08:38.374893 | orchestrator | 2026-04-17 01:08:38.374898 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-17 01:08:38.374903 | orchestrator | Friday 17 April 2026 01:03:53 +0000 (0:00:00.300) 0:03:58.179 ********** 2026-04-17 01:08:38.374909 | orchestrator | 2026-04-17 01:08:38.374914 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-17 01:08:38.374919 | orchestrator | Friday 17 April 2026 01:03:54 +0000 (0:00:00.799) 0:03:58.978 ********** 2026-04-17 01:08:38.374925 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.374930 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.374935 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.374941 | orchestrator | 2026-04-17 01:08:38.374946 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-17 01:08:38.374951 | orchestrator | Friday 17 April 2026 01:04:12 +0000 (0:00:18.506) 0:04:17.485 ********** 2026-04-17 01:08:38.374957 | orchestrator | changed: [testbed-node-0] 
2026-04-17 01:08:38.374965 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.374975 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.374984 | orchestrator | 2026-04-17 01:08:38.374994 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-17 01:08:38.375002 | orchestrator | 2026-04-17 01:08:38.375012 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 01:08:38.375021 | orchestrator | Friday 17 April 2026 01:04:18 +0000 (0:00:05.931) 0:04:23.416 ********** 2026-04-17 01:08:38.375031 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.375041 | orchestrator | 2026-04-17 01:08:38.375050 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 01:08:38.375059 | orchestrator | Friday 17 April 2026 01:04:20 +0000 (0:00:01.128) 0:04:24.545 ********** 2026-04-17 01:08:38.375073 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.375081 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.375089 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.375097 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.375107 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.375115 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.375123 | orchestrator | 2026-04-17 01:08:38.375132 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-17 01:08:38.375140 | orchestrator | Friday 17 April 2026 01:04:20 +0000 (0:00:00.845) 0:04:25.390 ********** 2026-04-17 01:08:38.375148 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.375156 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.375164 | orchestrator | skipping: 
[testbed-node-2] 2026-04-17 01:08:38.375173 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:08:38.375181 | orchestrator | 2026-04-17 01:08:38.375190 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-17 01:08:38.375335 | orchestrator | Friday 17 April 2026 01:04:21 +0000 (0:00:00.952) 0:04:26.343 ********** 2026-04-17 01:08:38.375351 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-17 01:08:38.375360 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-17 01:08:38.375368 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-17 01:08:38.375376 | orchestrator | 2026-04-17 01:08:38.375384 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-17 01:08:38.375393 | orchestrator | Friday 17 April 2026 01:04:22 +0000 (0:00:01.196) 0:04:27.540 ********** 2026-04-17 01:08:38.375402 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-17 01:08:38.375410 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-17 01:08:38.375419 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-17 01:08:38.375428 | orchestrator | 2026-04-17 01:08:38.375436 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-17 01:08:38.375457 | orchestrator | Friday 17 April 2026 01:04:24 +0000 (0:00:01.232) 0:04:28.772 ********** 2026-04-17 01:08:38.375466 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-17 01:08:38.375475 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.375484 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-17 01:08:38.375493 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.375501 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-17 01:08:38.375509 | 
orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.375518 | orchestrator | 2026-04-17 01:08:38.375526 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-17 01:08:38.375534 | orchestrator | Friday 17 April 2026 01:04:24 +0000 (0:00:00.723) 0:04:29.495 ********** 2026-04-17 01:08:38.375543 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 01:08:38.375552 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 01:08:38.375559 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.375565 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 01:08:38.375574 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 01:08:38.375583 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.375591 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-17 01:08:38.375600 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-17 01:08:38.375609 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.375617 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 01:08:38.375626 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 01:08:38.375634 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-17 01:08:38.375642 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 01:08:38.375650 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 01:08:38.375658 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-17 01:08:38.375666 | orchestrator | 2026-04-17 
01:08:38.375674 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-17 01:08:38.375683 | orchestrator | Friday 17 April 2026 01:04:27 +0000 (0:00:02.220) 0:04:31.716 ********** 2026-04-17 01:08:38.375692 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.375699 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.375708 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.375716 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.375726 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.375734 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.375743 | orchestrator | 2026-04-17 01:08:38.375752 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-17 01:08:38.375762 | orchestrator | Friday 17 April 2026 01:04:28 +0000 (0:00:01.430) 0:04:33.146 ********** 2026-04-17 01:08:38.375770 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.375776 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.375781 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.375786 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.375792 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.375797 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.375802 | orchestrator | 2026-04-17 01:08:38.375808 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-17 01:08:38.375814 | orchestrator | Friday 17 April 2026 01:04:30 +0000 (0:00:01.920) 0:04:35.067 ********** 2026-04-17 01:08:38.375829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375901 | orchestrator | 2026-04-17 01:08:38 | INFO  | Task c8b7321f-8470-409e-96fc-81daf018d767 is in state SUCCESS 2026-04-17 01:08:38.375910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.375995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376071 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376077 | orchestrator | 2026-04-17 01:08:38.376082 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-17 01:08:38.376088 | orchestrator | Friday 17 April 2026 01:04:33 +0000 (0:00:02.861) 0:04:37.929 ********** 2026-04-17 01:08:38.376094 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:08:38.376101 | orchestrator | 2026-04-17 01:08:38.376106 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-17 01:08:38.376112 | orchestrator | Friday 17 April 2026 01:04:34 +0000 (0:00:01.168) 0:04:39.098 ********** 2026-04-17 01:08:38.376117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376184 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.376348 | orchestrator | 2026-04-17 01:08:38.376356 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-17 01:08:38.376365 | orchestrator | Friday 17 April 2026 01:04:37 +0000 (0:00:03.315) 0:04:42.414 ********** 2026-04-17 01:08:38.376397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.376405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.376410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376425 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.376439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.376454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.376488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376498 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.376507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.376516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.376524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376560 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.376570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 01:08:38.376586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376595 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.376636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 01:08:38.376647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376656 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.376664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 01:08:38.376673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376690 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.376699 | orchestrator | 2026-04-17 01:08:38.376707 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-17 01:08:38.376716 | orchestrator | Friday 17 April 2026 01:04:39 +0000 (0:00:01.928) 0:04:44.342 ********** 2026-04-17 01:08:38.376725 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.376739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.376778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376786 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.376791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.376818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.376824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376830 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.376839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.376862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.376869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376875 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.376880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 01:08:38.376897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 01:08:38.376906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-17 01:08:38.376914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-17 01:08:38.376923 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.376937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.376947 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.376980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.376989 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.376995 | orchestrator |
2026-04-17 01:08:38.377000 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-17 01:08:38.377006 | orchestrator | Friday 17 April 2026 01:04:42 +0000 (0:00:02.341) 0:04:46.684 **********
2026-04-17 01:08:38.377011 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.377017 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.377022 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.377037 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-17 01:08:38.377043 | orchestrator |
2026-04-17 01:08:38.377049 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-04-17 01:08:38.377054 | orchestrator | Friday 17 April 2026 01:04:43 +0000 (0:00:00.927) 0:04:47.611 **********
2026-04-17 01:08:38.377059 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-17 01:08:38.377065 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-17 01:08:38.377070 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-17 01:08:38.377075 | orchestrator |
2026-04-17 01:08:38.377081 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-17 01:08:38.377086 | orchestrator | Friday 17 April 2026 01:04:44 +0000 (0:00:00.962) 0:04:48.574 **********
2026-04-17 01:08:38.377092 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-17 01:08:38.377097 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-17 01:08:38.377102 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-17 01:08:38.377108 | orchestrator |
2026-04-17 01:08:38.377113 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-17 01:08:38.377118 | orchestrator | Friday 17 April 2026 01:04:45 +0000 (0:00:01.230) 0:04:49.804 **********
2026-04-17 01:08:38.377124 | orchestrator | ok: [testbed-node-3]
2026-04-17 01:08:38.377129 | orchestrator | ok: [testbed-node-4]
2026-04-17 01:08:38.377135 | orchestrator | ok: [testbed-node-5]
2026-04-17 01:08:38.377140 | orchestrator |
2026-04-17 01:08:38.377145 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-17 01:08:38.377151 | orchestrator | Friday 17 April 2026 01:04:45 +0000 (0:00:00.525) 0:04:50.329 **********
2026-04-17 01:08:38.377156 | orchestrator | ok: [testbed-node-3]
2026-04-17 01:08:38.377161 | orchestrator | ok: [testbed-node-4]
2026-04-17 01:08:38.377167 | orchestrator | ok: [testbed-node-5]
2026-04-17 01:08:38.377172 | orchestrator |
2026-04-17 01:08:38.377177 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-17 01:08:38.377183 | orchestrator | Friday 17 April 2026 01:04:46 +0000 (0:00:00.478) 0:04:50.808 **********
2026-04-17 01:08:38.377188 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-17 01:08:38.377194 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-17 01:08:38.377199 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-17 01:08:38.377205 | orchestrator |
2026-04-17 01:08:38.377230 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-17 01:08:38.377241 | orchestrator | Friday 17 April 2026 01:04:47 +0000 (0:00:01.317) 0:04:52.125 **********
2026-04-17 01:08:38.377251 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-17 01:08:38.377259 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-17 01:08:38.377268 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-17 01:08:38.377277 | orchestrator |
2026-04-17 01:08:38.377285 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-17 01:08:38.377293 | orchestrator | Friday 17 April 2026 01:04:49 +0000 (0:00:01.578) 0:04:53.704 **********
2026-04-17 01:08:38.377302 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-17 01:08:38.377311 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-17 01:08:38.377320 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-17 01:08:38.377329 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-17 01:08:38.377338 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-17 01:08:38.377346 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-17 01:08:38.377354 | orchestrator |
2026-04-17 01:08:38.377362 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-17 01:08:38.377371 | orchestrator | Friday 17 April 2026 01:04:53 +0000 (0:00:04.829) 0:04:58.533 **********
2026-04-17 01:08:38.377380 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.377399 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.377409 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.377418 | orchestrator |
2026-04-17 01:08:38.377450 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-17 01:08:38.377460 | orchestrator | Friday 17 April 2026 01:04:54 +0000 (0:00:00.274) 0:04:58.808 **********
2026-04-17 01:08:38.377471 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.377480 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.377490 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.377500 | orchestrator |
2026-04-17 01:08:38.377510 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-17 01:08:38.377519 | orchestrator | Friday 17 April 2026 01:04:54 +0000 (0:00:00.230) 0:04:59.038 **********
2026-04-17 01:08:38.377530 | orchestrator | changed: [testbed-node-3]
2026-04-17 01:08:38.377539 | orchestrator | changed: [testbed-node-4]
2026-04-17 01:08:38.377550 | orchestrator | changed: [testbed-node-5]
2026-04-17 01:08:38.377559 | orchestrator |
2026-04-17 01:08:38.377569 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-17 01:08:38.377616 | orchestrator | Friday 17 April 2026 01:04:55 +0000 (0:00:01.412) 0:05:00.451 **********
2026-04-17 01:08:38.377629 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-17 01:08:38.377640 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-17 01:08:38.377649 | orchestrator |
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-17 01:08:38.377658 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-17 01:08:38.377667 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-17 01:08:38.377676 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-17 01:08:38.377685 | orchestrator | 2026-04-17 01:08:38.377693 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-17 01:08:38.377703 | orchestrator | Friday 17 April 2026 01:04:58 +0000 (0:00:02.922) 0:05:03.373 ********** 2026-04-17 01:08:38.377709 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 01:08:38.377715 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 01:08:38.377720 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 01:08:38.377726 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-17 01:08:38.377731 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.377737 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-17 01:08:38.377742 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.377747 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-17 01:08:38.377753 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.377758 | orchestrator | 2026-04-17 01:08:38.377763 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-17 01:08:38.377769 | orchestrator | Friday 17 April 2026 01:05:02 +0000 (0:00:03.351) 0:05:06.725 ********** 2026-04-17 01:08:38.377774 | 
orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.377780 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.377785 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.377790 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-17 01:08:38.377796 | orchestrator | 2026-04-17 01:08:38.377801 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-17 01:08:38.377807 | orchestrator | Friday 17 April 2026 01:05:04 +0000 (0:00:02.051) 0:05:08.777 ********** 2026-04-17 01:08:38.377818 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 01:08:38.377824 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-17 01:08:38.377829 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-17 01:08:38.377834 | orchestrator | 2026-04-17 01:08:38.377840 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-17 01:08:38.377845 | orchestrator | Friday 17 April 2026 01:05:05 +0000 (0:00:00.908) 0:05:09.685 ********** 2026-04-17 01:08:38.377850 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.377856 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.377861 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.377866 | orchestrator | 2026-04-17 01:08:38.377872 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-17 01:08:38.377877 | orchestrator | Friday 17 April 2026 01:05:05 +0000 (0:00:00.270) 0:05:09.955 ********** 2026-04-17 01:08:38.377883 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.377888 | orchestrator | 2026-04-17 01:08:38.377893 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-17 01:08:38.377899 | orchestrator | Friday 17 April 2026 01:05:05 +0000 (0:00:00.148) 0:05:10.104 ********** 
2026-04-17 01:08:38.377904 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.377910 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.377915 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.377921 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.377926 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.377931 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.377937 | orchestrator | 2026-04-17 01:08:38.377942 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-17 01:08:38.377947 | orchestrator | Friday 17 April 2026 01:05:06 +0000 (0:00:00.702) 0:05:10.807 ********** 2026-04-17 01:08:38.377953 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-17 01:08:38.377958 | orchestrator | 2026-04-17 01:08:38.377963 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-17 01:08:38.377972 | orchestrator | Friday 17 April 2026 01:05:06 +0000 (0:00:00.727) 0:05:11.534 ********** 2026-04-17 01:08:38.377978 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.377987 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.377996 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.378004 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.378048 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.378058 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.378064 | orchestrator | 2026-04-17 01:08:38.378069 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-17 01:08:38.378074 | orchestrator | Friday 17 April 2026 01:05:07 +0000 (0:00:00.605) 0:05:12.140 ********** 2026-04-17 01:08:38.378107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.378181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379581 | orchestrator | 2026-04-17 01:08:38.379586 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-17 01:08:38.379591 | orchestrator | Friday 17 April 2026 01:05:12 +0000 (0:00:05.340) 0:05:17.480 ********** 2026-04-17 01:08:38.379597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.379602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.379615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.379620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.379637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-17 01:08:38.379643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-17 01:08:38.379648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-17 01:08:38.379727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.379732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.379740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.379745 | orchestrator |
2026-04-17 01:08:38.379750 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-17 01:08:38.379759 | orchestrator | Friday 17 April 2026 01:05:21 +0000 (0:00:08.523) 0:05:26.004 **********
2026-04-17 01:08:38.379764 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.379773 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.379778 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.379783 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.379788 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.379792 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.379797 | orchestrator |
2026-04-17 01:08:38.379802 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-17 01:08:38.379807 | orchestrator | Friday 17 April 2026 01:05:23 +0000 (0:00:01.888) 0:05:27.893 **********
2026-04-17 01:08:38.379812 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 01:08:38.379817 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 01:08:38.379822 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 01:08:38.379827 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 01:08:38.379832 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 01:08:38.379837 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.379841 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 01:08:38.379846 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-17 01:08:38.379851 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 01:08:38.379856 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.379861 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 01:08:38.379865 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.379870 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 01:08:38.379875 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 01:08:38.379880 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-17 01:08:38.379885 | orchestrator |
2026-04-17 01:08:38.379890 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-17 01:08:38.379894 | orchestrator | Friday 17 April 2026 01:05:27 +0000 (0:00:04.240) 0:05:32.133 **********
2026-04-17 01:08:38.379899 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.379904 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.379909 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.379913 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.379918 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.379923 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.379928 | orchestrator |
2026-04-17 01:08:38.379932 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-17 01:08:38.379937 | orchestrator | Friday 17 April 2026 01:05:28 +0000 (0:00:00.764) 0:05:32.897 **********
2026-04-17 01:08:38.379942 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 01:08:38.379947 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 01:08:38.379952 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 01:08:38.379957 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 01:08:38.379961 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 01:08:38.379970 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-17 01:08:38.379975 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.379980 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.379985 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.379990 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.379997 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380002 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380007 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380011 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380016 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380021 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380026 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380031 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380039 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380044 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380049 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-17 01:08:38.380054 | orchestrator |
2026-04-17 01:08:38.380059 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-17 01:08:38.380063 | orchestrator | Friday 17 April 2026 01:05:34 +0000 (0:00:05.963) 0:05:38.860 **********
2026-04-17 01:08:38.380068 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 01:08:38.380073 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 01:08:38.380078 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 01:08:38.380083 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 01:08:38.380088 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 01:08:38.380092 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 01:08:38.380097 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 01:08:38.380102 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 01:08:38.380107 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 01:08:38.380112 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-17 01:08:38.380116 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 01:08:38.380121 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 01:08:38.380126 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 01:08:38.380131 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380136 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 01:08:38.380144 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380149 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 01:08:38.380154 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380159 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 01:08:38.380163 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 01:08:38.380168 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-17 01:08:38.380173 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 01:08:38.380178 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 01:08:38.380183 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-17 01:08:38.380187 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 01:08:38.380192 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 01:08:38.380197 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-17 01:08:38.380202 | orchestrator |
2026-04-17 01:08:38.380207 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-17 01:08:38.380231 | orchestrator | Friday 17 April 2026 01:05:41 +0000 (0:00:07.017) 0:05:45.877 **********
2026-04-17 01:08:38.380239 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.380246 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.380253 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.380260 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380268 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380275 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380282 | orchestrator |
2026-04-17 01:08:38.380291 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-17 01:08:38.380299 | orchestrator | Friday 17 April 2026 01:05:41 +0000 (0:00:00.528) 0:05:46.406 **********
2026-04-17 01:08:38.380311 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.380319 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.380326 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.380334 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380340 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380345 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380350 | orchestrator |
2026-04-17 01:08:38.380355 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-17 01:08:38.380360 | orchestrator | Friday 17 April 2026 01:05:42 +0000 (0:00:00.606) 0:05:47.013 **********
2026-04-17 01:08:38.380364 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380369 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380374 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380379 | orchestrator | changed: [testbed-node-3]
2026-04-17 01:08:38.380383 | orchestrator | changed: [testbed-node-5]
2026-04-17 01:08:38.380388 | orchestrator | changed: [testbed-node-4]
2026-04-17 01:08:38.380393 | orchestrator |
2026-04-17 01:08:38.380398 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-17 01:08:38.380407 | orchestrator | Friday 17 April 2026 01:05:44 +0000 (0:00:02.005) 0:05:49.019 **********
2026-04-17 01:08:38.380412 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380416 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380421 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380426 | orchestrator | changed: [testbed-node-3]
2026-04-17 01:08:38.380431 | orchestrator | changed: [testbed-node-4]
2026-04-17 01:08:38.380435 | orchestrator | changed: [testbed-node-5]
2026-04-17 01:08:38.380440 | orchestrator |
2026-04-17 01:08:38.380445 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-17 01:08:38.380455 | orchestrator | Friday 17 April 2026 01:05:46 +0000 (0:00:01.964) 0:05:50.983 **********
2026-04-17 01:08:38.380461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 01:08:38.380466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 01:08:38.380471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 01:08:38.380476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 01:08:38.380489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380495 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.380500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380509 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.380514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 01:08:38.380519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 01:08:38.380525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380529 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.380537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 01:08:38.380548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380566 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 01:08:38.380576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380581 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 01:08:38.380591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380596 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380601 | orchestrator |
2026-04-17 01:08:38.380606 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-17 01:08:38.380611 | orchestrator | Friday 17 April 2026 01:05:48 +0000 (0:00:01.670) 0:05:52.654 **********
2026-04-17 01:08:38.380616 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-17 01:08:38.380621 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-17 01:08:38.380626 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.380631 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-17 01:08:38.380635 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-17 01:08:38.380640 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.380645 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-17 01:08:38.380650 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-17 01:08:38.380654 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.380663 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-17 01:08:38.380681 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-17 01:08:38.380686 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-17 01:08:38.380691 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-17 01:08:38.380695 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380700 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380705 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-17 01:08:38.380709 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-17 01:08:38.380714 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380719 | orchestrator |
2026-04-17 01:08:38.380724 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-17 01:08:38.380729 | orchestrator | Friday 17 April 2026 01:05:49 +0000 (0:00:01.089) 0:05:53.743 **********
2026-04-17 01:08:38.380739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 01:08:38.380745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 01:08:38.380751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 01:08:38.380756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-17 01:08:38.380779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 01:08:38.380790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-17 01:08:38.380796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 01:08:38.380801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 01:08:38.380806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380811 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-17 01:08:38.380816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-17 01:08:38.380863 | orchestrator |
2026-04-17 01:08:38.380867 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-17 01:08:38.380872 | orchestrator | Friday 17 April 2026 01:05:52 +0000 (0:00:03.273) 0:05:57.016 **********
2026-04-17 01:08:38.380877 | orchestrator | skipping: [testbed-node-3]
2026-04-17 01:08:38.380882 | orchestrator | skipping: [testbed-node-4]
2026-04-17 01:08:38.380887 | orchestrator | skipping: [testbed-node-5]
2026-04-17 01:08:38.380892 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:08:38.380897 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:08:38.380901 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:08:38.380910 | orchestrator |
2026-04-17 01:08:38.380915 | orchestrator | TASK [nova-cell 
: Flush handlers] ********************************************** 2026-04-17 01:08:38.380920 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.614) 0:05:57.631 ********** 2026-04-17 01:08:38.380925 | orchestrator | 2026-04-17 01:08:38.380930 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 01:08:38.380935 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.119) 0:05:57.751 ********** 2026-04-17 01:08:38.380939 | orchestrator | 2026-04-17 01:08:38.380944 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 01:08:38.380949 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.117) 0:05:57.869 ********** 2026-04-17 01:08:38.380954 | orchestrator | 2026-04-17 01:08:38.380959 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 01:08:38.380963 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.123) 0:05:57.992 ********** 2026-04-17 01:08:38.380968 | orchestrator | 2026-04-17 01:08:38.380973 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 01:08:38.380978 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.120) 0:05:58.113 ********** 2026-04-17 01:08:38.380983 | orchestrator | 2026-04-17 01:08:38.380987 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-17 01:08:38.380992 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.225) 0:05:58.338 ********** 2026-04-17 01:08:38.380997 | orchestrator | 2026-04-17 01:08:38.381005 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-17 01:08:38.381010 | orchestrator | Friday 17 April 2026 01:05:53 +0000 (0:00:00.159) 0:05:58.498 ********** 2026-04-17 01:08:38.381014 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.381019 | 
orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.381024 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.381029 | orchestrator | 2026-04-17 01:08:38.381033 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-17 01:08:38.381038 | orchestrator | Friday 17 April 2026 01:06:01 +0000 (0:00:07.939) 0:06:06.438 ********** 2026-04-17 01:08:38.381043 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.381048 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.381052 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.381057 | orchestrator | 2026-04-17 01:08:38.381062 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-17 01:08:38.381067 | orchestrator | Friday 17 April 2026 01:06:15 +0000 (0:00:13.108) 0:06:19.546 ********** 2026-04-17 01:08:38.381072 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.381079 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.381085 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.381089 | orchestrator | 2026-04-17 01:08:38.381094 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-17 01:08:38.381099 | orchestrator | Friday 17 April 2026 01:06:31 +0000 (0:00:16.392) 0:06:35.939 ********** 2026-04-17 01:08:38.381104 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.381108 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.381113 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.381118 | orchestrator | 2026-04-17 01:08:38.381123 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-17 01:08:38.381128 | orchestrator | Friday 17 April 2026 01:07:04 +0000 (0:00:33.506) 0:07:09.446 ********** 2026-04-17 01:08:38.381132 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.381137 | orchestrator | 
changed: [testbed-node-3] 2026-04-17 01:08:38.381142 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.381147 | orchestrator | 2026-04-17 01:08:38.381152 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-17 01:08:38.381156 | orchestrator | Friday 17 April 2026 01:07:05 +0000 (0:00:00.989) 0:07:10.436 ********** 2026-04-17 01:08:38.381161 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.381166 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.381175 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.381179 | orchestrator | 2026-04-17 01:08:38.381184 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-17 01:08:38.381189 | orchestrator | Friday 17 April 2026 01:07:06 +0000 (0:00:00.764) 0:07:11.200 ********** 2026-04-17 01:08:38.381194 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:08:38.381199 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:08:38.381203 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:08:38.381208 | orchestrator | 2026-04-17 01:08:38.381253 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-17 01:08:38.381259 | orchestrator | Friday 17 April 2026 01:07:31 +0000 (0:00:24.570) 0:07:35.771 ********** 2026-04-17 01:08:38.381264 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.381269 | orchestrator | 2026-04-17 01:08:38.381273 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-17 01:08:38.381278 | orchestrator | Friday 17 April 2026 01:07:31 +0000 (0:00:00.277) 0:07:36.049 ********** 2026-04-17 01:08:38.381283 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.381288 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.381292 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381297 | orchestrator | 
skipping: [testbed-node-1] 2026-04-17 01:08:38.381302 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.381307 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-17 01:08:38.381312 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 01:08:38.381317 | orchestrator | 2026-04-17 01:08:38.381321 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-17 01:08:38.381326 | orchestrator | Friday 17 April 2026 01:07:52 +0000 (0:00:20.732) 0:07:56.781 ********** 2026-04-17 01:08:38.381331 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.381336 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381340 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.381345 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.381350 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.381355 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.381359 | orchestrator | 2026-04-17 01:08:38.381364 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-17 01:08:38.381369 | orchestrator | Friday 17 April 2026 01:07:58 +0000 (0:00:05.796) 0:08:02.577 ********** 2026-04-17 01:08:38.381374 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.381378 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.381383 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381388 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.381392 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.381397 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-04-17 01:08:38.381402 | orchestrator | 2026-04-17 01:08:38.381407 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2026-04-17 01:08:38.381412 | orchestrator | Friday 17 April 2026 01:08:00 +0000 (0:00:02.072) 0:08:04.650 ********** 2026-04-17 01:08:38.381416 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 01:08:38.381421 | orchestrator | 2026-04-17 01:08:38.381426 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-17 01:08:38.381431 | orchestrator | Friday 17 April 2026 01:08:14 +0000 (0:00:14.246) 0:08:18.896 ********** 2026-04-17 01:08:38.381436 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 01:08:38.381440 | orchestrator | 2026-04-17 01:08:38.381445 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-17 01:08:38.381450 | orchestrator | Friday 17 April 2026 01:08:15 +0000 (0:00:00.907) 0:08:19.804 ********** 2026-04-17 01:08:38.381458 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.381463 | orchestrator | 2026-04-17 01:08:38.381482 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-17 01:08:38.381487 | orchestrator | Friday 17 April 2026 01:08:16 +0000 (0:00:00.875) 0:08:20.679 ********** 2026-04-17 01:08:38.381492 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-17 01:08:38.381496 | orchestrator | 2026-04-17 01:08:38.381501 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-17 01:08:38.381506 | orchestrator | Friday 17 April 2026 01:08:29 +0000 (0:00:13.722) 0:08:34.401 ********** 2026-04-17 01:08:38.381511 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:08:38.381516 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:08:38.381521 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:08:38.381525 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:08:38.381530 | orchestrator | ok: [testbed-node-1] 2026-04-17 
01:08:38.381535 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:08:38.381539 | orchestrator | 2026-04-17 01:08:38.381544 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-17 01:08:38.381549 | orchestrator | 2026-04-17 01:08:38.381557 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-17 01:08:38.381562 | orchestrator | Friday 17 April 2026 01:08:31 +0000 (0:00:01.605) 0:08:36.007 ********** 2026-04-17 01:08:38.381567 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:08:38.381572 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:08:38.381577 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:08:38.381582 | orchestrator | 2026-04-17 01:08:38.381586 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-17 01:08:38.381591 | orchestrator | 2026-04-17 01:08:38.381596 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-17 01:08:38.381601 | orchestrator | Friday 17 April 2026 01:08:32 +0000 (0:00:01.116) 0:08:37.123 ********** 2026-04-17 01:08:38.381606 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381610 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.381615 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.381620 | orchestrator | 2026-04-17 01:08:38.381625 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-17 01:08:38.381630 | orchestrator | 2026-04-17 01:08:38.381634 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-17 01:08:38.381639 | orchestrator | Friday 17 April 2026 01:08:33 +0000 (0:00:00.554) 0:08:37.678 ********** 2026-04-17 01:08:38.381644 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-17 01:08:38.381649 | orchestrator | skipping: 
[testbed-node-3] => (item=nova-compute)  2026-04-17 01:08:38.381654 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-17 01:08:38.381659 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-17 01:08:38.381664 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-17 01:08:38.381668 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-17 01:08:38.381673 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:08:38.381678 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-17 01:08:38.381683 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-17 01:08:38.381688 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-17 01:08:38.381692 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-17 01:08:38.381697 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-17 01:08:38.381702 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-17 01:08:38.381707 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:08:38.381712 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-17 01:08:38.381716 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-17 01:08:38.381721 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-17 01:08:38.381726 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-17 01:08:38.381736 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-17 01:08:38.381741 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-17 01:08:38.381746 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:08:38.381751 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-17 01:08:38.381756 | orchestrator | skipping: 
[testbed-node-0] => (item=nova-compute)  2026-04-17 01:08:38.381760 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-17 01:08:38.381765 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-17 01:08:38.381770 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-17 01:08:38.381775 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-17 01:08:38.381780 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-17 01:08:38.381785 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-17 01:08:38.381789 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-17 01:08:38.381794 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-17 01:08:38.381799 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-17 01:08:38.381804 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-17 01:08:38.381809 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381813 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.381818 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-17 01:08:38.381823 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-17 01:08:38.381828 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-17 01:08:38.381833 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-17 01:08:38.381837 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-17 01:08:38.381845 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-17 01:08:38.381850 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.381855 | orchestrator | 2026-04-17 01:08:38.381859 | orchestrator | PLAY [Reload global Nova API services] 
***************************************** 2026-04-17 01:08:38.381864 | orchestrator | 2026-04-17 01:08:38.381868 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-17 01:08:38.381873 | orchestrator | Friday 17 April 2026 01:08:34 +0000 (0:00:01.315) 0:08:38.993 ********** 2026-04-17 01:08:38.381878 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-17 01:08:38.381882 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-17 01:08:38.381887 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381891 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-17 01:08:38.381896 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-17 01:08:38.381900 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.381905 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-17 01:08:38.381913 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-17 01:08:38.381918 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.381923 | orchestrator | 2026-04-17 01:08:38.381927 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-17 01:08:38.381932 | orchestrator | 2026-04-17 01:08:38.381936 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-17 01:08:38.381941 | orchestrator | Friday 17 April 2026 01:08:35 +0000 (0:00:00.655) 0:08:39.649 ********** 2026-04-17 01:08:38.381945 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381950 | orchestrator | 2026-04-17 01:08:38.381954 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-17 01:08:38.381959 | orchestrator | 2026-04-17 01:08:38.381964 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-17 01:08:38.381976 | 
orchestrator | Friday 17 April 2026 01:08:35 +0000 (0:00:00.629) 0:08:40.280 ********** 2026-04-17 01:08:38.381983 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:08:38.381992 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:08:38.381999 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:08:38.382007 | orchestrator | 2026-04-17 01:08:38.382073 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:08:38.382086 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-17 01:08:38.382095 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-17 01:08:38.382103 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-17 01:08:38.382111 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-17 01:08:38.382119 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-17 01:08:38.382127 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-17 01:08:38.382135 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-17 01:08:38.382142 | orchestrator | 2026-04-17 01:08:38.382150 | orchestrator | 2026-04-17 01:08:38.382158 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:08:38.382165 | orchestrator | Friday 17 April 2026 01:08:36 +0000 (0:00:00.518) 0:08:40.798 ********** 2026-04-17 01:08:38.382173 | orchestrator | =============================================================================== 2026-04-17 01:08:38.382181 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.61s 
2026-04-17 01:08:38.382189 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 33.51s 2026-04-17 01:08:38.382197 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.57s 2026-04-17 01:08:38.382205 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.87s 2026-04-17 01:08:38.382235 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.73s 2026-04-17 01:08:38.382244 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.60s 2026-04-17 01:08:38.382252 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.51s 2026-04-17 01:08:38.382259 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.89s 2026-04-17 01:08:38.382266 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.39s 2026-04-17 01:08:38.382273 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.78s 2026-04-17 01:08:38.382281 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.25s 2026-04-17 01:08:38.382288 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.05s 2026-04-17 01:08:38.382295 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.72s 2026-04-17 01:08:38.382301 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.11s 2026-04-17 01:08:38.382308 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.73s 2026-04-17 01:08:38.382323 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.15s 2026-04-17 01:08:38.382331 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.52s 2026-04-17 
01:08:38.382339 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.20s 2026-04-17 01:08:38.382355 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.05s 2026-04-17 01:08:38.382362 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.94s 2026-04-17 01:08:38.382370 | orchestrator | 2026-04-17 01:08:38 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:08:38.382377 | orchestrator | 2026-04-17 01:08:38 | INFO  | Wait 1 second(s) until the next check
the next check 2026-04-17 01:11:25.758088 | orchestrator | 2026-04-17 01:11:25 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:11:25.758218 | orchestrator | 2026-04-17 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:11:28.794724 | orchestrator | 2026-04-17 01:11:28 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:11:28.794807 | orchestrator | 2026-04-17 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:11:31.842771 | orchestrator | 2026-04-17 01:11:31 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state STARTED 2026-04-17 01:11:31.842835 | orchestrator | 2026-04-17 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-04-17 01:11:34.889706 | orchestrator | 2026-04-17 01:11:34 | INFO  | Task ace018c1-1cc3-435b-bfdd-2ea4ede11066 is in state SUCCESS 2026-04-17 01:11:34.891901 | orchestrator | 2026-04-17 01:11:34.891959 | orchestrator | 2026-04-17 01:11:34.891967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-17 01:11:34.891990 | orchestrator | 2026-04-17 01:11:34.891997 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-17 01:11:34.892002 | orchestrator | Friday 17 April 2026 01:06:43 +0000 (0:00:00.274) 0:00:00.274 ********** 2026-04-17 01:11:34.892008 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892014 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:11:34.892019 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:11:34.892024 | orchestrator | 2026-04-17 01:11:34.892030 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-17 01:11:34.892035 | orchestrator | Friday 17 April 2026 01:06:43 +0000 (0:00:00.272) 0:00:00.547 ********** 2026-04-17 01:11:34.892040 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-17 01:11:34.892046 | 
orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-17 01:11:34.892051 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-17 01:11:34.892056 | orchestrator | 2026-04-17 01:11:34.892062 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-17 01:11:34.892067 | orchestrator | 2026-04-17 01:11:34.892072 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 01:11:34.892077 | orchestrator | Friday 17 April 2026 01:06:43 +0000 (0:00:00.268) 0:00:00.816 ********** 2026-04-17 01:11:34.892083 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:11:34.892089 | orchestrator | 2026-04-17 01:11:34.892094 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-17 01:11:34.892100 | orchestrator | Friday 17 April 2026 01:06:44 +0000 (0:00:00.539) 0:00:01.355 ********** 2026-04-17 01:11:34.892105 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-17 01:11:34.892110 | orchestrator | 2026-04-17 01:11:34.892115 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-17 01:11:34.892121 | orchestrator | Friday 17 April 2026 01:06:47 +0000 (0:00:03.430) 0:00:04.786 ********** 2026-04-17 01:11:34.892126 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-17 01:11:34.892132 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-17 01:11:34.892136 | orchestrator | 2026-04-17 01:11:34.892139 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-17 01:11:34.892142 | orchestrator | Friday 17 April 2026 01:06:54 +0000 (0:00:07.301) 0:00:12.088 
********** 2026-04-17 01:11:34.892145 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-17 01:11:34.892148 | orchestrator | 2026-04-17 01:11:34.892152 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-17 01:11:34.892155 | orchestrator | Friday 17 April 2026 01:06:58 +0000 (0:00:03.215) 0:00:15.304 ********** 2026-04-17 01:11:34.892158 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-17 01:11:34.892161 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-17 01:11:34.892164 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-17 01:11:34.892167 | orchestrator | 2026-04-17 01:11:34.892170 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-17 01:11:34.892205 | orchestrator | Friday 17 April 2026 01:07:06 +0000 (0:00:08.779) 0:00:24.084 ********** 2026-04-17 01:11:34.892211 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-17 01:11:34.892217 | orchestrator | 2026-04-17 01:11:34.892220 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-17 01:11:34.892223 | orchestrator | Friday 17 April 2026 01:07:10 +0000 (0:00:03.736) 0:00:27.820 ********** 2026-04-17 01:11:34.892227 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-17 01:11:34.892230 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-17 01:11:34.892237 | orchestrator | 2026-04-17 01:11:34.892240 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-17 01:11:34.892243 | orchestrator | Friday 17 April 2026 01:07:18 +0000 (0:00:08.078) 0:00:35.898 ********** 2026-04-17 01:11:34.892246 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-17 01:11:34.892249 | orchestrator | changed: 
[testbed-node-0] => (item=load-balancer_global_observer) 2026-04-17 01:11:34.892252 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-17 01:11:34.892256 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-17 01:11:34.892259 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-17 01:11:34.892262 | orchestrator | 2026-04-17 01:11:34.892272 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 01:11:34.892275 | orchestrator | Friday 17 April 2026 01:07:35 +0000 (0:00:16.708) 0:00:52.606 ********** 2026-04-17 01:11:34.892278 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:11:34.892281 | orchestrator | 2026-04-17 01:11:34.892284 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-17 01:11:34.892287 | orchestrator | Friday 17 April 2026 01:07:35 +0000 (0:00:00.571) 0:00:53.178 ********** 2026-04-17 01:11:34.892290 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892293 | orchestrator | 2026-04-17 01:11:34.892296 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-17 01:11:34.892299 | orchestrator | Friday 17 April 2026 01:07:40 +0000 (0:00:04.360) 0:00:57.538 ********** 2026-04-17 01:11:34.892302 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892305 | orchestrator | 2026-04-17 01:11:34.892308 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-17 01:11:34.892445 | orchestrator | Friday 17 April 2026 01:07:44 +0000 (0:00:04.634) 0:01:02.173 ********** 2026-04-17 01:11:34.892451 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892456 | orchestrator | 2026-04-17 01:11:34.892461 | orchestrator | TASK [octavia : Create security groups for octavia] 
**************************** 2026-04-17 01:11:34.892543 | orchestrator | Friday 17 April 2026 01:07:48 +0000 (0:00:03.987) 0:01:06.160 ********** 2026-04-17 01:11:34.892547 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-17 01:11:34.892550 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-17 01:11:34.892553 | orchestrator | 2026-04-17 01:11:34.892557 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-17 01:11:34.892560 | orchestrator | Friday 17 April 2026 01:08:00 +0000 (0:00:11.138) 0:01:17.299 ********** 2026-04-17 01:11:34.892563 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-17 01:11:34.892566 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-17 01:11:34.892570 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-17 01:11:34.892573 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-17 01:11:34.892577 | orchestrator | 2026-04-17 01:11:34.892580 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-17 01:11:34.892583 | orchestrator | Friday 17 April 2026 01:08:16 +0000 (0:00:16.632) 0:01:33.931 ********** 2026-04-17 01:11:34.892586 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892589 | orchestrator | 2026-04-17 01:11:34.892592 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-17 01:11:34.892625 | orchestrator | Friday 17 April 2026 01:08:21 +0000 (0:00:05.001) 0:01:38.933 ********** 2026-04-17 
01:11:34.892629 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892639 | orchestrator | 2026-04-17 01:11:34.892642 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-17 01:11:34.892645 | orchestrator | Friday 17 April 2026 01:08:27 +0000 (0:00:06.194) 0:01:45.127 ********** 2026-04-17 01:11:34.892671 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:11:34.892675 | orchestrator | 2026-04-17 01:11:34.892679 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-17 01:11:34.892684 | orchestrator | Friday 17 April 2026 01:08:28 +0000 (0:00:00.513) 0:01:45.641 ********** 2026-04-17 01:11:34.892687 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892690 | orchestrator | 2026-04-17 01:11:34.892694 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 01:11:34.892697 | orchestrator | Friday 17 April 2026 01:08:33 +0000 (0:00:05.015) 0:01:50.657 ********** 2026-04-17 01:11:34.892700 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:11:34.892703 | orchestrator | 2026-04-17 01:11:34.892706 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-17 01:11:34.892709 | orchestrator | Friday 17 April 2026 01:08:34 +0000 (0:00:00.905) 0:01:51.563 ********** 2026-04-17 01:11:34.892712 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892715 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892718 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892721 | orchestrator | 2026-04-17 01:11:34.892724 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-17 01:11:34.892727 | orchestrator | Friday 17 April 2026 01:08:39 +0000 (0:00:05.592) 0:01:57.155 ********** 2026-04-17 
01:11:34.892730 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892733 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892736 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892739 | orchestrator | 2026-04-17 01:11:34.892743 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-17 01:11:34.892746 | orchestrator | Friday 17 April 2026 01:08:44 +0000 (0:00:05.066) 0:02:02.222 ********** 2026-04-17 01:11:34.892749 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892752 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892755 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892758 | orchestrator | 2026-04-17 01:11:34.892761 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-17 01:11:34.892764 | orchestrator | Friday 17 April 2026 01:08:45 +0000 (0:00:00.781) 0:02:03.004 ********** 2026-04-17 01:11:34.892767 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892770 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:11:34.892773 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:11:34.892776 | orchestrator | 2026-04-17 01:11:34.892779 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-17 01:11:34.892786 | orchestrator | Friday 17 April 2026 01:08:47 +0000 (0:00:01.945) 0:02:04.949 ********** 2026-04-17 01:11:34.892789 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892792 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892795 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892798 | orchestrator | 2026-04-17 01:11:34.892801 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-17 01:11:34.892804 | orchestrator | Friday 17 April 2026 01:08:49 +0000 (0:00:01.295) 0:02:06.245 ********** 2026-04-17 01:11:34.892807 | orchestrator 
| changed: [testbed-node-0] 2026-04-17 01:11:34.892810 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892813 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892816 | orchestrator | 2026-04-17 01:11:34.892819 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-17 01:11:34.892822 | orchestrator | Friday 17 April 2026 01:08:50 +0000 (0:00:01.154) 0:02:07.399 ********** 2026-04-17 01:11:34.892825 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892828 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892831 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892837 | orchestrator | 2026-04-17 01:11:34.892862 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-17 01:11:34.892866 | orchestrator | Friday 17 April 2026 01:08:52 +0000 (0:00:02.234) 0:02:09.634 ********** 2026-04-17 01:11:34.892869 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.892872 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.892875 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.892878 | orchestrator | 2026-04-17 01:11:34.892881 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-17 01:11:34.892884 | orchestrator | Friday 17 April 2026 01:08:53 +0000 (0:00:01.569) 0:02:11.204 ********** 2026-04-17 01:11:34.892888 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892891 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:11:34.892899 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:11:34.892902 | orchestrator | 2026-04-17 01:11:34.892905 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-17 01:11:34.892908 | orchestrator | Friday 17 April 2026 01:08:54 +0000 (0:00:00.634) 0:02:11.839 ********** 2026-04-17 01:11:34.892912 | orchestrator | ok: [testbed-node-1] 
2026-04-17 01:11:34.892915 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892918 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:11:34.892921 | orchestrator | 2026-04-17 01:11:34.892924 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 01:11:34.892927 | orchestrator | Friday 17 April 2026 01:08:57 +0000 (0:00:02.832) 0:02:14.672 ********** 2026-04-17 01:11:34.892930 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-17 01:11:34.892933 | orchestrator | 2026-04-17 01:11:34.892936 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-17 01:11:34.892939 | orchestrator | Friday 17 April 2026 01:08:58 +0000 (0:00:00.666) 0:02:15.338 ********** 2026-04-17 01:11:34.892942 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892945 | orchestrator | 2026-04-17 01:11:34.892948 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-17 01:11:34.892951 | orchestrator | Friday 17 April 2026 01:09:02 +0000 (0:00:04.285) 0:02:19.624 ********** 2026-04-17 01:11:34.892954 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.892957 | orchestrator | 2026-04-17 01:11:34.892960 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-17 01:11:34.892964 | orchestrator | Friday 17 April 2026 01:09:05 +0000 (0:00:03.558) 0:02:23.182 ********** 2026-04-17 01:11:34.892967 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-17 01:11:34.892970 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-17 01:11:34.892973 | orchestrator | 2026-04-17 01:11:34.893003 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-17 01:11:34.893027 | orchestrator | Friday 17 April 2026 01:09:13 +0000 
(0:00:07.338) 0:02:30.521 ********** 2026-04-17 01:11:34.893034 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.893038 | orchestrator | 2026-04-17 01:11:34.893041 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-17 01:11:34.893044 | orchestrator | Friday 17 April 2026 01:09:16 +0000 (0:00:03.567) 0:02:34.088 ********** 2026-04-17 01:11:34.893047 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:11:34.893050 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:11:34.893053 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:11:34.893058 | orchestrator | 2026-04-17 01:11:34.893063 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-17 01:11:34.893068 | orchestrator | Friday 17 April 2026 01:09:17 +0000 (0:00:00.287) 0:02:34.375 ********** 2026-04-17 01:11:34.893076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.893097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.893103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.893108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.893114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.893120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.893128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.893136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.893145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.893151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-04-17 01:11:34.893156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.893161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.893167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:11:34.893189 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.893539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.893562 | orchestrator |
2026-04-17 01:11:34.893568 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-17 01:11:34.893573 | orchestrator | Friday 17 April 2026 01:09:19 +0000 (0:00:02.732) 0:02:37.108 **********
2026-04-17 01:11:34.893576 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:11:34.893580 | orchestrator |
2026-04-17 01:11:34.893599 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-17 01:11:34.893603 | orchestrator | Friday 17 April 2026 01:09:20 +0000 (0:00:00.148) 0:02:37.257 **********
2026-04-17 01:11:34.893607 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:11:34.893611 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:11:34.893614 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:11:34.893618 | orchestrator |
2026-04-17 01:11:34.893621 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-17 01:11:34.893625 | orchestrator | Friday 17 April 2026 01:09:20 +0000 (0:00:00.270) 0:02:37.528 **********
2026-04-17 01:11:34.893629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.893634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894472 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:11:34.894502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894525 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:11:34.894530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894562 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:11:34.894565 | orchestrator |
2026-04-17 01:11:34.894569 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-17 01:11:34.894573 | orchestrator | Friday 17 April 2026 01:09:20 +0000 (0:00:00.659) 0:02:38.187 **********
2026-04-17 01:11:34.894577 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 01:11:34.894581 | orchestrator |
2026-04-17 01:11:34.894584 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-17 01:11:34.894588 | orchestrator | Friday 17 April 2026 01:09:21 +0000 (0:00:00.657) 0:02:38.844 **********
2026-04-17 01:11:34.894594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894674 | orchestrator |
2026-04-17 01:11:34.894678 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-17 01:11:34.894681 | orchestrator | Friday 17 April 2026 01:09:26 +0000 (0:00:05.101) 0:02:43.945 **********
2026-04-17 01:11:34.894685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894707 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:11:34.894713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894734 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:11:34.894739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.894743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.894749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.894760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.894764 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:11:34.894767 | orchestrator |
2026-04-17 01:11:34.894771 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-17 01:11:34.894774 | orchestrator | Friday 17 April 2026 01:09:27 +0000 (0:00:00.656) 0:02:44.601 **********
2026-04-17 01:11:34.894778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.895516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 01:11:34.895542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.895564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-17 01:11:34.895571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.895577 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:11:34.895583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-17 01:11:34.895589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-17 
01:11:34.895595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 01:11:34.895604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 01:11:34.895618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:11:34.895624 | orchestrator | skipping: [testbed-node-1] 2026-04-17 
01:11:34.895628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-17 01:11:34.895631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-17 01:11:34.895634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-17 01:11:34.895637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-17 01:11:34.895643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-17 01:11:34.895649 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:11:34.895652 | orchestrator | 2026-04-17 01:11:34.895655 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-17 01:11:34.895658 | orchestrator | Friday 17 April 2026 01:09:28 +0000 (0:00:00.983) 0:02:45.585 ********** 2026-04-17 01:11:34.895664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.895668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.895671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.895675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.895680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.895686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.895692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.895730 | orchestrator |
2026-04-17 01:11:34.895733 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-17 01:11:34.895737 | orchestrator | Friday 17 April 2026 01:09:33 +0000 (0:00:05.282) 0:02:50.868 **********
2026-04-17 01:11:34.895740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 01:11:34.895744 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 01:11:34.895747 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-17 01:11:34.895750 | orchestrator |
2026-04-17 01:11:34.895753 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-17 01:11:34.895756 | orchestrator | Friday 17 April 2026 01:09:35 +0000 (0:00:01.636) 0:02:52.504 **********
2026-04-17 01:11:34.895759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876',
'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.895767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.895774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.895777 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.895781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.895784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.895787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.895838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.895841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.895850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-17 01:11:34.895853 | orchestrator |
2026-04-17 01:11:34.895856 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-17 01:11:34.895859 | orchestrator | Friday 17 April 2026 01:09:51 +0000 (0:00:16.059) 0:03:08.564 **********
2026-04-17 01:11:34.895863 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:11:34.895866 | orchestrator | changed: [testbed-node-1]
2026-04-17 01:11:34.895869 | orchestrator | changed: [testbed-node-2]
2026-04-17 01:11:34.895872 | orchestrator |
2026-04-17 01:11:34.895875 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-17 01:11:34.895878
| orchestrator | Friday 17 April 2026 01:09:53 +0000 (0:00:01.932) 0:03:10.497 **********
2026-04-17 01:11:34.895881 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.895884 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.895889 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.895892 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.895895 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.895899 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.895902 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.895905 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.895908 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.895911 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-17 01:11:34.895914 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-17 01:11:34.895917 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-17 01:11:34.895920 | orchestrator |
2026-04-17 01:11:34.895923 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-17 01:11:34.895926 | orchestrator | Friday 17 April 2026 01:09:58 +0000 (0:00:05.034) 0:03:15.531 **********
2026-04-17 01:11:34.895929 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.895932 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.895935 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.895940 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.895945 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.895950 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.895954 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.895960 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.895965 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.895974 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-17 01:11:34.895979 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-17 01:11:34.895984 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-17 01:11:34.895989 | orchestrator |
2026-04-17 01:11:34.895993 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-17 01:11:34.895998 | orchestrator | Friday 17 April 2026 01:10:03 +0000 (0:00:05.146) 0:03:20.678 **********
2026-04-17 01:11:34.896003 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.896008 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.896013 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-17 01:11:34.896018 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.896023 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.896028 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-17 01:11:34.896033 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.896038 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.896043 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-17 01:11:34.896048 |
orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-17 01:11:34.896053 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-17 01:11:34.896058 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-17 01:11:34.896064 | orchestrator | 2026-04-17 01:11:34.896069 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-17 01:11:34.896074 | orchestrator | Friday 17 April 2026 01:10:08 +0000 (0:00:05.254) 0:03:25.932 ********** 2026-04-17 01:11:34.896083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.896093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.896099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-17 01:11:34.896110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 
01:11:34.896116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.896122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-17 01:11:34.896129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-17 01:11:34.896206 | orchestrator | 2026-04-17 01:11:34.896211 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-17 01:11:34.896220 | orchestrator | Friday 17 April 2026 01:10:12 +0000 (0:00:04.007) 0:03:29.940 ********** 2026-04-17 01:11:34.896226 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:11:34.896231 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:11:34.896236 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:11:34.896241 | orchestrator | 2026-04-17 01:11:34.896244 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-17 01:11:34.896247 | orchestrator | Friday 17 April 2026 01:10:13 +0000 (0:00:00.446) 0:03:30.387 ********** 2026-04-17 01:11:34.896250 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896253 | orchestrator | 2026-04-17 01:11:34.896256 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-17 01:11:34.896259 | orchestrator | Friday 17 April 2026 01:10:15 +0000 (0:00:02.264) 0:03:32.651 ********** 
2026-04-17 01:11:34.896262 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896265 | orchestrator | 2026-04-17 01:11:34.896268 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-17 01:11:34.896271 | orchestrator | Friday 17 April 2026 01:10:17 +0000 (0:00:02.320) 0:03:34.972 ********** 2026-04-17 01:11:34.896274 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896277 | orchestrator | 2026-04-17 01:11:34.896280 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-17 01:11:34.896283 | orchestrator | Friday 17 April 2026 01:10:20 +0000 (0:00:02.687) 0:03:37.660 ********** 2026-04-17 01:11:34.896286 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896289 | orchestrator | 2026-04-17 01:11:34.896292 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-17 01:11:34.896295 | orchestrator | Friday 17 April 2026 01:10:22 +0000 (0:00:02.518) 0:03:40.178 ********** 2026-04-17 01:11:34.896299 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896302 | orchestrator | 2026-04-17 01:11:34.896305 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 01:11:34.896308 | orchestrator | Friday 17 April 2026 01:10:45 +0000 (0:00:22.155) 0:04:02.333 ********** 2026-04-17 01:11:34.896311 | orchestrator | 2026-04-17 01:11:34.896314 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 01:11:34.896317 | orchestrator | Friday 17 April 2026 01:10:45 +0000 (0:00:00.067) 0:04:02.401 ********** 2026-04-17 01:11:34.896320 | orchestrator | 2026-04-17 01:11:34.896323 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-17 01:11:34.896326 | orchestrator | Friday 17 April 2026 01:10:45 +0000 (0:00:00.063) 0:04:02.465 
********** 2026-04-17 01:11:34.896329 | orchestrator | 2026-04-17 01:11:34.896332 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-17 01:11:34.896335 | orchestrator | Friday 17 April 2026 01:10:45 +0000 (0:00:00.063) 0:04:02.529 ********** 2026-04-17 01:11:34.896338 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896341 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.896344 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.896347 | orchestrator | 2026-04-17 01:11:34.896350 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-17 01:11:34.896353 | orchestrator | Friday 17 April 2026 01:10:56 +0000 (0:00:10.981) 0:04:13.510 ********** 2026-04-17 01:11:34.896356 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.896359 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.896362 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896365 | orchestrator | 2026-04-17 01:11:34.896368 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-17 01:11:34.896372 | orchestrator | Friday 17 April 2026 01:11:04 +0000 (0:00:08.379) 0:04:21.890 ********** 2026-04-17 01:11:34.896375 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896378 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.896381 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.896384 | orchestrator | 2026-04-17 01:11:34.896387 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-17 01:11:34.896392 | orchestrator | Friday 17 April 2026 01:11:10 +0000 (0:00:05.558) 0:04:27.448 ********** 2026-04-17 01:11:34.896395 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896398 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.896401 | orchestrator | changed: [testbed-node-1] 
2026-04-17 01:11:34.896404 | orchestrator | 2026-04-17 01:11:34.896407 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-17 01:11:34.896410 | orchestrator | Friday 17 April 2026 01:11:21 +0000 (0:00:10.794) 0:04:38.242 ********** 2026-04-17 01:11:34.896413 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:11:34.896416 | orchestrator | changed: [testbed-node-2] 2026-04-17 01:11:34.896420 | orchestrator | changed: [testbed-node-1] 2026-04-17 01:11:34.896422 | orchestrator | 2026-04-17 01:11:34.896428 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:11:34.896431 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-17 01:11:34.896435 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 01:11:34.896438 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-17 01:11:34.896441 | orchestrator | 2026-04-17 01:11:34.896444 | orchestrator | 2026-04-17 01:11:34.896447 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:11:34.896450 | orchestrator | Friday 17 April 2026 01:11:34 +0000 (0:00:13.254) 0:04:51.496 ********** 2026-04-17 01:11:34.896455 | orchestrator | =============================================================================== 2026-04-17 01:11:34.896458 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.16s 2026-04-17 01:11:34.896461 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.71s 2026-04-17 01:11:34.896464 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.63s 2026-04-17 01:11:34.896467 | orchestrator | octavia : Copying over octavia.conf 
------------------------------------ 16.06s 2026-04-17 01:11:34.896471 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 13.25s 2026-04-17 01:11:34.896474 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.14s 2026-04-17 01:11:34.896477 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.98s 2026-04-17 01:11:34.896480 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.79s 2026-04-17 01:11:34.896483 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.78s 2026-04-17 01:11:34.896486 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.38s 2026-04-17 01:11:34.896489 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.08s 2026-04-17 01:11:34.896492 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.34s 2026-04-17 01:11:34.896495 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.30s 2026-04-17 01:11:34.896498 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.19s 2026-04-17 01:11:34.896501 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.59s 2026-04-17 01:11:34.896504 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.56s 2026-04-17 01:11:34.896507 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.28s 2026-04-17 01:11:34.896510 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.25s 2026-04-17 01:11:34.896513 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.15s 2026-04-17 01:11:34.896516 | orchestrator | service-cert-copy : octavia | Copying over extra CA 
certificates -------- 5.10s 2026-04-17 01:11:34.896519 | orchestrator | 2026-04-17 01:11:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-17 
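The healthcheck dicts logged in the octavia container checks above (interval, retries, start_period, test, timeout) can be read as Docker healthcheck settings. Below is a minimal sketch of that mapping; the helper name and the flag layout are illustrative assumptions, not kolla-ansible's actual code:

```python
# Sketch only: translate a kolla-style healthcheck dict (as seen in the log)
# into docker-run style flags. The function name and flag mapping are
# illustrative assumptions, not kolla-ansible's implementation.
def healthcheck_to_docker_args(hc):
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # 'test' is a [kind, command] pair; for CMD-SHELL the command is run in a shell.
    kind, cmd = hc["test"]
    if kind == "CMD-SHELL":
        args += ["--health-cmd", cmd]
    return args

# One of the octavia-worker healthcheck dicts from the log above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
      "timeout": "30"}
print(healthcheck_to_docker_args(hc))
```

The `healthcheck_port` / `healthcheck_curl` commands in the `test` lists are scripts shipped inside the kolla images; the ports (5672, 3306, 9876) show which dependency each service's check probes.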
01:12:35.751788 | orchestrator | 2026-04-17 01:12:35.934899 | orchestrator | 2026-04-17 01:12:35.942341 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Apr 17 01:12:35 UTC 2026 2026-04-17 01:12:35.942455 | orchestrator | 2026-04-17 01:12:36.295963 | orchestrator | ok: Runtime: 0:32:03.232852 2026-04-17 01:12:36.559182 | 2026-04-17 01:12:36.559334 | TASK [Bootstrap services] 2026-04-17 01:12:37.340215 | orchestrator | 2026-04-17 01:12:37.340371 | orchestrator | # BOOTSTRAP 2026-04-17 01:12:37.340383 | orchestrator | 2026-04-17 01:12:37.340389 | orchestrator | + set -e 2026-04-17 01:12:37.340395 | orchestrator | + echo 2026-04-17 01:12:37.340401 | orchestrator | + echo '# BOOTSTRAP' 2026-04-17 01:12:37.340409 | orchestrator | + echo 2026-04-17 01:12:37.340432 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-17 01:12:37.349689 | orchestrator | + set -e 2026-04-17 01:12:37.349772 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-17 01:12:41.711489 | orchestrator | 2026-04-17 01:12:41 | INFO  | It takes a moment until task 50204722-56a8-4933-a7bc-f528736162ff (flavor-manager) has been started and output is visible here. 
2026-04-17 01:12:51.520430 | orchestrator | 2026-04-17 01:12:46 | INFO  | Flavor SCS-1L-1 created 2026-04-17 01:12:51.520540 | orchestrator | 2026-04-17 01:12:46 | INFO  | Flavor SCS-1L-1-5 created 2026-04-17 01:12:51.520555 | orchestrator | 2026-04-17 01:12:46 | INFO  | Flavor SCS-1V-2 created 2026-04-17 01:12:51.520563 | orchestrator | 2026-04-17 01:12:47 | INFO  | Flavor SCS-1V-2-5 created 2026-04-17 01:12:51.520569 | orchestrator | 2026-04-17 01:12:47 | INFO  | Flavor SCS-1V-4 created 2026-04-17 01:12:51.520576 | orchestrator | 2026-04-17 01:12:47 | INFO  | Flavor SCS-1V-4-10 created 2026-04-17 01:12:51.520583 | orchestrator | 2026-04-17 01:12:47 | INFO  | Flavor SCS-1V-8 created 2026-04-17 01:12:51.520590 | orchestrator | 2026-04-17 01:12:47 | INFO  | Flavor SCS-1V-8-20 created 2026-04-17 01:12:51.520604 | orchestrator | 2026-04-17 01:12:48 | INFO  | Flavor SCS-2V-4 created 2026-04-17 01:12:51.520611 | orchestrator | 2026-04-17 01:12:48 | INFO  | Flavor SCS-2V-4-10 created 2026-04-17 01:12:51.520618 | orchestrator | 2026-04-17 01:12:48 | INFO  | Flavor SCS-2V-8 created 2026-04-17 01:12:51.520624 | orchestrator | 2026-04-17 01:12:48 | INFO  | Flavor SCS-2V-8-20 created 2026-04-17 01:12:51.520631 | orchestrator | 2026-04-17 01:12:48 | INFO  | Flavor SCS-2V-16 created 2026-04-17 01:12:51.520637 | orchestrator | 2026-04-17 01:12:48 | INFO  | Flavor SCS-2V-16-50 created 2026-04-17 01:12:51.520644 | orchestrator | 2026-04-17 01:12:49 | INFO  | Flavor SCS-4V-8 created 2026-04-17 01:12:51.520650 | orchestrator | 2026-04-17 01:12:49 | INFO  | Flavor SCS-4V-8-20 created 2026-04-17 01:12:51.520657 | orchestrator | 2026-04-17 01:12:49 | INFO  | Flavor SCS-4V-16 created 2026-04-17 01:12:51.520664 | orchestrator | 2026-04-17 01:12:49 | INFO  | Flavor SCS-4V-16-50 created 2026-04-17 01:12:51.520670 | orchestrator | 2026-04-17 01:12:49 | INFO  | Flavor SCS-4V-32 created 2026-04-17 01:12:51.520674 | orchestrator | 2026-04-17 01:12:49 | INFO  | Flavor SCS-4V-32-100 created 
2026-04-17 01:12:51.520678 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-8V-16 created 2026-04-17 01:12:51.520682 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-8V-16-50 created 2026-04-17 01:12:51.520687 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-8V-32 created 2026-04-17 01:12:51.520690 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-8V-32-100 created 2026-04-17 01:12:51.520694 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-16V-32 created 2026-04-17 01:12:51.520698 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-16V-32-100 created 2026-04-17 01:12:51.520702 | orchestrator | 2026-04-17 01:12:50 | INFO  | Flavor SCS-2V-4-20s created 2026-04-17 01:12:51.520706 | orchestrator | 2026-04-17 01:12:51 | INFO  | Flavor SCS-4V-8-50s created 2026-04-17 01:12:51.520710 | orchestrator | 2026-04-17 01:12:51 | INFO  | Flavor SCS-4V-16-100s created 2026-04-17 01:12:51.520714 | orchestrator | 2026-04-17 01:12:51 | INFO  | Flavor SCS-8V-32-100s created 2026-04-17 01:12:53.070472 | orchestrator | 2026-04-17 01:12:53 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-17 01:13:03.138363 | orchestrator | 2026-04-17 01:13:03 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-17 01:13:03.208831 | orchestrator | 2026-04-17 01:13:03 | INFO  | Task 95716f37-6fb1-40e2-ae23-e89ad57f240c (bootstrap-basic) was prepared for execution. 2026-04-17 01:13:03.208917 | orchestrator | 2026-04-17 01:13:03 | INFO  | It takes a moment until task 95716f37-6fb1-40e2-ae23-e89ad57f240c (bootstrap-basic) has been started and output is visible here. 
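The flavor names created by flavor-manager above follow the SCS flavor naming scheme. A small parser sketch, assuming the pattern `SCS-<vcpus><class>-<ram>[-<disk>[s]]` (vCPUs, RAM in GiB, optional root disk in GB, trailing `s` for a local SSD disk); check the SCS flavor-naming standard before relying on this reading:

```python
import re

# Assumption: names follow the SCS standard-flavor pattern
# SCS-<vcpus><class>-<ram>[-<disk>[s]], where class V = vCPU and
# L = more heavily over-subscribed vCPU. Verify against the SCS spec.
FLAVOR_RE = re.compile(r"^SCS-(\d+)([VL])-(\d+)(?:-(\d+)(s?))?$")

def parse_scs_flavor(name):
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    vcpus, cpu_class, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(vcpus),
        "cpu_class": cpu_class,
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else 0,  # 0: no root disk in the name
        "local_ssd": ssd == "s",
    }

# Names taken from the flavor-manager output above:
print(parse_scs_flavor("SCS-4V-16-50"))
print(parse_scs_flavor("SCS-2V-4-20s"))
```

So `SCS-4V-16-50` reads as 4 vCPUs, 16 GiB RAM, 50 GB root disk, and the `...-20s` variants created at the end of the run add local-SSD-backed disks.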
2026-04-17 01:13:48.843789 | orchestrator |
2026-04-17 01:13:48.843883 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-17 01:13:48.843893 | orchestrator |
2026-04-17 01:13:48.843901 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-17 01:13:48.843910 | orchestrator | Friday 17 April 2026 01:13:06 +0000 (0:00:00.107) 0:00:00.107 **********
2026-04-17 01:13:48.843919 | orchestrator | ok: [localhost]
2026-04-17 01:13:48.843936 | orchestrator |
2026-04-17 01:13:48.843949 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-17 01:13:48.843956 | orchestrator | Friday 17 April 2026 01:13:08 +0000 (0:00:01.915) 0:00:02.022 **********
2026-04-17 01:13:48.843964 | orchestrator | ok: [localhost]
2026-04-17 01:13:48.843970 | orchestrator |
2026-04-17 01:13:48.843975 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-17 01:13:48.843982 | orchestrator | Friday 17 April 2026 01:13:16 +0000 (0:00:08.368) 0:00:10.390 **********
2026-04-17 01:13:48.843988 | orchestrator | changed: [localhost]
2026-04-17 01:13:48.843994 | orchestrator |
2026-04-17 01:13:48.844000 | orchestrator | TASK [Create public network] ***************************************************
2026-04-17 01:13:48.844006 | orchestrator | Friday 17 April 2026 01:13:24 +0000 (0:00:08.176) 0:00:18.567 **********
2026-04-17 01:13:48.844012 | orchestrator | changed: [localhost]
2026-04-17 01:13:48.844019 | orchestrator |
2026-04-17 01:13:48.844028 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-17 01:13:48.844036 | orchestrator | Friday 17 April 2026 01:13:30 +0000 (0:00:05.830) 0:00:24.397 **********
2026-04-17 01:13:48.844042 | orchestrator | changed: [localhost]
2026-04-17 01:13:48.844048 | orchestrator |
2026-04-17 01:13:48.844055 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-17 01:13:48.844062 | orchestrator | Friday 17 April 2026 01:13:36 +0000 (0:00:06.214) 0:00:30.612 **********
2026-04-17 01:13:48.844069 | orchestrator | changed: [localhost]
2026-04-17 01:13:48.844075 | orchestrator |
2026-04-17 01:13:48.844082 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-17 01:13:48.844089 | orchestrator | Friday 17 April 2026 01:13:41 +0000 (0:00:04.153) 0:00:34.766 **********
2026-04-17 01:13:48.844094 | orchestrator | changed: [localhost]
2026-04-17 01:13:48.844098 | orchestrator |
2026-04-17 01:13:48.844102 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-17 01:13:48.844115 | orchestrator | Friday 17 April 2026 01:13:45 +0000 (0:00:03.990) 0:00:38.756 **********
2026-04-17 01:13:48.844119 | orchestrator | ok: [localhost]
2026-04-17 01:13:48.844123 | orchestrator |
2026-04-17 01:13:48.844127 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:13:48.844132 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-17 01:13:48.844136 | orchestrator |
2026-04-17 01:13:48.844140 | orchestrator |
2026-04-17 01:13:48.844144 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:13:48.844148 | orchestrator | Friday 17 April 2026 01:13:48 +0000 (0:00:03.617) 0:00:42.374 **********
2026-04-17 01:13:48.844152 | orchestrator | ===============================================================================
2026-04-17 01:13:48.844156 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.37s
2026-04-17 01:13:48.844235 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.18s
2026-04-17 01:13:48.844241 | orchestrator | Set public network to default ------------------------------------------- 6.21s
2026-04-17 01:13:48.844245 | orchestrator | Create public network --------------------------------------------------- 5.83s
2026-04-17 01:13:48.844249 | orchestrator | Create public subnet ---------------------------------------------------- 4.15s
2026-04-17 01:13:48.844253 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.99s
2026-04-17 01:13:48.844257 | orchestrator | Create manager role ----------------------------------------------------- 3.62s
2026-04-17 01:13:48.844261 | orchestrator | Gathering Facts --------------------------------------------------------- 1.92s
2026-04-17 01:13:50.784875 | orchestrator | 2026-04-17 01:13:50 | INFO  | It takes a moment until task c9356f0e-ee7c-4bed-ad02-5391e8a80c03 (image-manager) has been started and output is visible here.
2026-04-17 01:14:35.139115 | orchestrator | 2026-04-17 01:13:53 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-17 01:14:35.139258 | orchestrator | 2026-04-17 01:13:53 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-17 01:14:35.139278 | orchestrator | 2026-04-17 01:13:53 | INFO  | Importing image Cirros 0.6.2
2026-04-17 01:14:35.139286 | orchestrator | 2026-04-17 01:13:53 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-17 01:14:35.139296 | orchestrator | 2026-04-17 01:13:56 | INFO  | Waiting for image to leave queued state...
2026-04-17 01:14:35.139305 | orchestrator | 2026-04-17 01:13:58 | INFO  | Waiting for import to complete...
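The bootstrap play above creates each admin-level resource (volume type, public network, subnet, subnet pool, role) with a single idempotent task. The actual playbook is not part of this log; a minimal sketch of what a subset of such tasks can look like with the openstack.cloud Ansible collection (module names are real, the CIDR is illustrative since the testbed's values are not shown here):

```yaml
# Sketch only -- the real bootstrap playbook is not included in this log.
- name: Bootstrap basic OpenStack services
  hosts: localhost
  tasks:
    - name: Create public network
      openstack.cloud.network:
        name: public
        external: true
        state: present

    - name: Create public subnet
      openstack.cloud.subnet:
        name: public-subnet
        network_name: public
        cidr: 192.0.2.0/24  # illustrative; the testbed's CIDR is not shown in this log
        state: present

    - name: Create manager role
      openstack.cloud.identity_role:
        name: manager
        state: present
```

Because every module is declarative (`state: present`), re-running the play reports `ok` instead of `changed`, which matches the mixed `ok`/`changed` results in the recap above.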
2026-04-17 01:14:35.139314 | orchestrator | 2026-04-17 01:14:08 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-17 01:14:35.139323 | orchestrator | 2026-04-17 01:14:08 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-17 01:14:35.139331 | orchestrator | 2026-04-17 01:14:08 | INFO  | Setting internal_version = 0.6.2
2026-04-17 01:14:35.139340 | orchestrator | 2026-04-17 01:14:08 | INFO  | Setting image_original_user = cirros
2026-04-17 01:14:35.139348 | orchestrator | 2026-04-17 01:14:08 | INFO  | Adding tag os:cirros
2026-04-17 01:14:35.139357 | orchestrator | 2026-04-17 01:14:09 | INFO  | Setting property architecture: x86_64
2026-04-17 01:14:35.139365 | orchestrator | 2026-04-17 01:14:09 | INFO  | Setting property hw_disk_bus: scsi
2026-04-17 01:14:35.139374 | orchestrator | 2026-04-17 01:14:09 | INFO  | Setting property hw_rng_model: virtio
2026-04-17 01:14:35.139384 | orchestrator | 2026-04-17 01:14:10 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-17 01:14:35.139389 | orchestrator | 2026-04-17 01:14:10 | INFO  | Setting property hw_watchdog_action: reset
2026-04-17 01:14:35.139394 | orchestrator | 2026-04-17 01:14:10 | INFO  | Setting property hypervisor_type: qemu
2026-04-17 01:14:35.139408 | orchestrator | 2026-04-17 01:14:10 | INFO  | Setting property os_distro: cirros
2026-04-17 01:14:35.139414 | orchestrator | 2026-04-17 01:14:10 | INFO  | Setting property os_purpose: minimal
2026-04-17 01:14:35.139419 | orchestrator | 2026-04-17 01:14:11 | INFO  | Setting property replace_frequency: never
2026-04-17 01:14:35.139424 | orchestrator | 2026-04-17 01:14:11 | INFO  | Setting property uuid_validity: none
2026-04-17 01:14:35.139429 | orchestrator | 2026-04-17 01:14:11 | INFO  | Setting property provided_until: none
2026-04-17 01:14:35.139434 | orchestrator | 2026-04-17 01:14:11 | INFO  | Setting property image_description: Cirros
2026-04-17 01:14:35.139439 | orchestrator | 2026-04-17 01:14:12 | INFO  | Setting property image_name: Cirros
2026-04-17 01:14:35.139462 | orchestrator | 2026-04-17 01:14:12 | INFO  | Setting property internal_version: 0.6.2
2026-04-17 01:14:35.139467 | orchestrator | 2026-04-17 01:14:12 | INFO  | Setting property image_original_user: cirros
2026-04-17 01:14:35.139472 | orchestrator | 2026-04-17 01:14:12 | INFO  | Setting property os_version: 0.6.2
2026-04-17 01:14:35.139478 | orchestrator | 2026-04-17 01:14:13 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-17 01:14:35.139484 | orchestrator | 2026-04-17 01:14:13 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-17 01:14:35.139489 | orchestrator | 2026-04-17 01:14:13 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-17 01:14:35.139494 | orchestrator | 2026-04-17 01:14:13 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-17 01:14:35.139502 | orchestrator | 2026-04-17 01:14:13 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-17 01:14:35.139508 | orchestrator | 2026-04-17 01:14:14 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-17 01:14:35.139513 | orchestrator | 2026-04-17 01:14:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-17 01:14:35.139518 | orchestrator | 2026-04-17 01:14:14 | INFO  | Importing image Cirros 0.6.3
2026-04-17 01:14:35.139523 | orchestrator | 2026-04-17 01:14:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-17 01:14:35.139528 | orchestrator | 2026-04-17 01:14:16 | INFO  | Waiting for image to leave queued state...
2026-04-17 01:14:35.139532 | orchestrator | 2026-04-17 01:14:18 | INFO  | Waiting for import to complete...
2026-04-17 01:14:35.139550 | orchestrator | 2026-04-17 01:14:28 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-17 01:14:35.139556 | orchestrator | 2026-04-17 01:14:28 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-17 01:14:35.139561 | orchestrator | 2026-04-17 01:14:28 | INFO  | Setting internal_version = 0.6.3
2026-04-17 01:14:35.139565 | orchestrator | 2026-04-17 01:14:28 | INFO  | Setting image_original_user = cirros
2026-04-17 01:14:35.139570 | orchestrator | 2026-04-17 01:14:28 | INFO  | Adding tag os:cirros
2026-04-17 01:14:35.139575 | orchestrator | 2026-04-17 01:14:29 | INFO  | Setting property architecture: x86_64
2026-04-17 01:14:35.139580 | orchestrator | 2026-04-17 01:14:29 | INFO  | Setting property hw_disk_bus: scsi
2026-04-17 01:14:35.139585 | orchestrator | 2026-04-17 01:14:29 | INFO  | Setting property hw_rng_model: virtio
2026-04-17 01:14:35.139590 | orchestrator | 2026-04-17 01:14:29 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-17 01:14:35.139595 | orchestrator | 2026-04-17 01:14:30 | INFO  | Setting property hw_watchdog_action: reset
2026-04-17 01:14:35.139599 | orchestrator | 2026-04-17 01:14:30 | INFO  | Setting property hypervisor_type: qemu
2026-04-17 01:14:35.139604 | orchestrator | 2026-04-17 01:14:30 | INFO  | Setting property os_distro: cirros
2026-04-17 01:14:35.139609 | orchestrator | 2026-04-17 01:14:30 | INFO  | Setting property os_purpose: minimal
2026-04-17 01:14:35.139614 | orchestrator | 2026-04-17 01:14:31 | INFO  | Setting property replace_frequency: never
2026-04-17 01:14:35.139619 | orchestrator | 2026-04-17 01:14:31 | INFO  | Setting property uuid_validity: none
2026-04-17 01:14:35.139624 | orchestrator | 2026-04-17 01:14:31 | INFO  | Setting property provided_until: none
2026-04-17 01:14:35.139629 | orchestrator | 2026-04-17 01:14:31 | INFO  | Setting property image_description: Cirros
2026-04-17 01:14:35.139638 | orchestrator | 2026-04-17 01:14:32 | INFO  | Setting property image_name: Cirros
2026-04-17 01:14:35.139643 | orchestrator | 2026-04-17 01:14:32 | INFO  | Setting property internal_version: 0.6.3
2026-04-17 01:14:35.139648 | orchestrator | 2026-04-17 01:14:32 | INFO  | Setting property image_original_user: cirros
2026-04-17 01:14:35.139653 | orchestrator | 2026-04-17 01:14:32 | INFO  | Setting property os_version: 0.6.3
2026-04-17 01:14:35.139658 | orchestrator | 2026-04-17 01:14:33 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-17 01:14:35.139664 | orchestrator | 2026-04-17 01:14:33 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-17 01:14:35.139670 | orchestrator | 2026-04-17 01:14:33 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-17 01:14:35.139675 | orchestrator | 2026-04-17 01:14:33 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-17 01:14:35.139681 | orchestrator | 2026-04-17 01:14:33 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-17 01:14:35.362531 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-04-17 01:14:37.514090 | orchestrator | 2026-04-17 01:14:37 | INFO  | date: 2026-04-16
2026-04-17 01:14:37.514205 | orchestrator | 2026-04-17 01:14:37 | INFO  | image: octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-17 01:14:37.514251 | orchestrator | 2026-04-17 01:14:37 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-17 01:14:37.514272 | orchestrator | 2026-04-17 01:14:37 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2.CHECKSUM
2026-04-17 01:14:37.781748 | orchestrator | 2026-04-17 01:14:37 | INFO  | checksum: d0860f46848f6ee8ed337cc33d5ba7e96db2ef81fcfd28d6d9ee3a3b596108d8
2026-04-17 01:14:37.862912 | orchestrator |
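The amphora bootstrap script above logs the image URL, a companion `.CHECKSUM` URL, and the sha256 sum it extracted from that file. A minimal sketch of such a verification step (the function name and the CHECKSUM file layout are assumptions; the actual logic lives in `/opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh`, which is not reproduced in this log):

```shell
# Sketch of a sha256 verification step like the one the amphora
# bootstrap script performs; function name and CHECKSUM layout are assumptions.
set -eu

# verify_image IMAGE_FILE CHECKSUM_FILE
# The CHECKSUM file is assumed to contain the expected sha256 sum somewhere
# in its text (e.g. a "<hex>  <filename>" line as produced by sha256sum).
verify_image() {
    image="$1"
    checksum_file="$2"
    # Take the first 64-hex-digit token found in the checksum file.
    expected=$(grep -oE '[a-f0-9]{64}' "$checksum_file" | head -n1)
    actual=$(sha256sum "$image" | awk '{ print $1 }')
    if [ "$expected" = "$actual" ]; then
        echo "checksum OK: $actual"
    else
        echo "checksum MISMATCH: expected $expected got $actual" >&2
        return 1
    fi
}
```

With the `checksum_url` above, `expected` would resolve to the `d0860f46…` value the script logs; a mismatch would make the script fail before the image is imported.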
2026-04-17 01:14:37 | INFO  | It takes a moment until task 31a651b2-63f5-48e1-9c4e-baa7dc0e7811 (image-manager) has been started and output is visible here.
2026-04-17 01:15:40.557713 | orchestrator | 2026-04-17 01:14:40 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-16'
2026-04-17 01:15:40.557830 | orchestrator | 2026-04-17 01:14:40 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2: 200
2026-04-17 01:15:40.557843 | orchestrator | 2026-04-17 01:14:40 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-16
2026-04-17 01:15:40.557850 | orchestrator | 2026-04-17 01:14:40 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-17 01:15:40.557858 | orchestrator | 2026-04-17 01:14:42 | INFO  | Waiting for image to leave queued state...
2026-04-17 01:15:40.557864 | orchestrator | 2026-04-17 01:14:44 | INFO  | Waiting for import to complete...
2026-04-17 01:15:40.557871 | orchestrator | 2026-04-17 01:14:54 | INFO  | Waiting for import to complete...
2026-04-17 01:15:40.557876 | orchestrator | 2026-04-17 01:15:04 | INFO  | Waiting for import to complete...
2026-04-17 01:15:40.557917 | orchestrator | 2026-04-17 01:15:14 | INFO  | Waiting for import to complete...
2026-04-17 01:15:40.557927 | orchestrator | 2026-04-17 01:15:24 | INFO  | Waiting for import to complete...
2026-04-17 01:15:40.557940 | orchestrator | 2026-04-17 01:15:34 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-16' successfully completed, reloading images
2026-04-17 01:15:40.557967 | orchestrator | 2026-04-17 01:15:35 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-16'
2026-04-17 01:15:40.557974 | orchestrator | 2026-04-17 01:15:35 | INFO  | Setting internal_version = 2026-04-16
2026-04-17 01:15:40.557980 | orchestrator | 2026-04-17 01:15:35 | INFO  | Setting image_original_user = ubuntu
2026-04-17 01:15:40.557987 | orchestrator | 2026-04-17 01:15:35 | INFO  | Adding tag amphora
2026-04-17 01:15:40.557994 | orchestrator | 2026-04-17 01:15:35 | INFO  | Adding tag os:ubuntu
2026-04-17 01:15:40.558000 | orchestrator | 2026-04-17 01:15:35 | INFO  | Setting property architecture: x86_64
2026-04-17 01:15:40.558006 | orchestrator | 2026-04-17 01:15:35 | INFO  | Setting property hw_disk_bus: scsi
2026-04-17 01:15:40.558011 | orchestrator | 2026-04-17 01:15:36 | INFO  | Setting property hw_rng_model: virtio
2026-04-17 01:15:40.558049 | orchestrator | 2026-04-17 01:15:36 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-17 01:15:40.558055 | orchestrator | 2026-04-17 01:15:36 | INFO  | Setting property hw_watchdog_action: reset
2026-04-17 01:15:40.558061 | orchestrator | 2026-04-17 01:15:36 | INFO  | Setting property hypervisor_type: qemu
2026-04-17 01:15:40.558067 | orchestrator | 2026-04-17 01:15:37 | INFO  | Setting property os_distro: ubuntu
2026-04-17 01:15:40.558073 | orchestrator | 2026-04-17 01:15:37 | INFO  | Setting property replace_frequency: quarterly
2026-04-17 01:15:40.558078 | orchestrator | 2026-04-17 01:15:37 | INFO  | Setting property uuid_validity: last-1
2026-04-17 01:15:40.558084 | orchestrator | 2026-04-17 01:15:37 | INFO  | Setting property provided_until: none
2026-04-17 01:15:40.558090 | orchestrator | 2026-04-17 01:15:38 | INFO  | Setting property os_purpose: network
2026-04-17 01:15:40.558096 | orchestrator | 2026-04-17 01:15:38 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-17 01:15:40.558115 | orchestrator | 2026-04-17 01:15:38 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-17 01:15:40.558121 | orchestrator | 2026-04-17 01:15:38 | INFO  | Setting property internal_version: 2026-04-16
2026-04-17 01:15:40.558127 | orchestrator | 2026-04-17 01:15:39 | INFO  | Setting property image_original_user: ubuntu
2026-04-17 01:15:40.558132 | orchestrator | 2026-04-17 01:15:39 | INFO  | Setting property os_version: 2026-04-16
2026-04-17 01:15:40.558139 | orchestrator | 2026-04-17 01:15:39 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-17 01:15:40.558145 | orchestrator | 2026-04-17 01:15:39 | INFO  | Setting property image_build_date: 2026-04-16
2026-04-17 01:15:40.558151 | orchestrator | 2026-04-17 01:15:40 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-16'
2026-04-17 01:15:40.558161 | orchestrator | 2026-04-17 01:15:40 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-16'
2026-04-17 01:15:40.558170 | orchestrator | 2026-04-17 01:15:40 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-17 01:15:40.558253 | orchestrator | 2026-04-17 01:15:40 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-17 01:15:40.558267 | orchestrator | 2026-04-17 01:15:40 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-17 01:15:40.558275 | orchestrator | 2026-04-17 01:15:40 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-17 01:15:41.219848 | orchestrator | ok: Runtime: 0:03:03.869331
2026-04-17 01:15:41.245718 |
2026-04-17 01:15:41.245884 | TASK [Run checks]
2026-04-17 01:15:41.968160 | orchestrator | + set -e
2026-04-17 01:15:41.968382 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 01:15:41.968410 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 01:15:41.968432 | orchestrator | ++ INTERACTIVE=false
2026-04-17 01:15:41.968449 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 01:15:41.968463 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 01:15:41.968478 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 01:15:41.969177 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 01:15:41.975300 | orchestrator |
2026-04-17 01:15:41.975403 | orchestrator | # CHECK
2026-04-17 01:15:41.975420 | orchestrator |
2026-04-17 01:15:41.975434 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-17 01:15:41.975468 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-17 01:15:41.975491 | orchestrator | + echo
2026-04-17 01:15:41.975505 | orchestrator | + echo '# CHECK'
2026-04-17 01:15:41.975517 | orchestrator | + echo
2026-04-17 01:15:41.975535 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 01:15:41.976576 | orchestrator | ++ semver latest 5.0.0
2026-04-17 01:15:42.035834 | orchestrator |
2026-04-17 01:15:42.035925 | orchestrator | ## Containers @ testbed-manager
2026-04-17 01:15:42.035937 | orchestrator |
2026-04-17 01:15:42.035946 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-17 01:15:42.035953 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-17 01:15:42.035960 | orchestrator | + echo
2026-04-17 01:15:42.035967 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-17 01:15:42.035975 | orchestrator | + echo
2026-04-17 01:15:42.035982 | orchestrator | + osism container testbed-manager ps
2026-04-17 01:15:43.111334 | orchestrator | 2026-04-17 01:15:43 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-17 01:15:43.475270 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 01:15:43.475376 | orchestrator | 109eb9bf7392 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2026-04-17 01:15:43.475395 | orchestrator | 5049a0ba74db registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2026-04-17 01:15:43.475404 | orchestrator | a91254eb2871 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-17 01:15:43.475408 | orchestrator | e5945fbee942 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-17 01:15:43.475414 | orchestrator | 789f70927627 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2026-04-17 01:15:43.475418 | orchestrator | 164906bbd049 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient
2026-04-17 01:15:43.475422 | orchestrator | d1d16617f78c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-04-17 01:15:43.475426 | orchestrator | 801a8f3555f7 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-17 01:15:43.475449 | orchestrator | 3b34ba5a8328 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes fluentd
2026-04-17 01:15:43.475453 | orchestrator | a289e404a8ea phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2026-04-17 01:15:43.475457 | orchestrator | 604e763cc796 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient
2026-04-17 01:15:43.475461 | orchestrator | caad6fcd53b6 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer
2026-04-17 01:15:43.475465 | orchestrator | b82dee853bd1 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-17 01:15:43.475470 | orchestrator | 546240ea7876 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1
2026-04-17 01:15:43.475474 | orchestrator | 8e3e3330ce0b registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) kolla-ansible
2026-04-17 01:15:43.475489 | orchestrator | 61963b1e2cba registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) ceph-ansible
2026-04-17 01:15:43.475497 | orchestrator | e91e98b2824f registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-ansible
2026-04-17 01:15:43.475501 | orchestrator | b1317a45413e registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-kubernetes
2026-04-17 01:15:43.475505 | orchestrator | 173c1aa8dda0 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-17 01:15:43.475508 | orchestrator | afc22cd3c23a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-17 01:15:43.475512 | orchestrator | d4224d51e4a6 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1
2026-04-17 01:15:43.475516 | orchestrator | df0d1fad80d7 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1
2026-04-17 01:15:43.475520 | orchestrator | d7f7ee90b9a5 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient
2026-04-17 01:15:43.475529 | orchestrator | eb015d25c52f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1
2026-04-17 01:15:43.475533 | orchestrator | 66ef303af709 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-17 01:15:43.475536 | orchestrator | 00cf91534733 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1
2026-04-17 01:15:43.475540 | orchestrator | 12371baf5a71 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1
2026-04-17 01:15:43.475544 | orchestrator | c0dc340aa911 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-17 01:15:43.475548 | orchestrator | 480bce2dcc73 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-17 01:15:43.605293 | orchestrator |
2026-04-17 01:15:43.605387 | orchestrator | ## Images @ testbed-manager
2026-04-17 01:15:43.605398 | orchestrator |
2026-04-17 01:15:43.605405 | orchestrator | + echo
2026-04-17 01:15:43.605412 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-17 01:15:43.605420 | orchestrator | + echo
2026-04-17 01:15:43.605431 | orchestrator | + osism container testbed-manager images
2026-04-17 01:15:44.987277 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-17 01:15:44.987358 | orchestrator | registry.osism.tech/osism/osism-ansible latest 64b03f39686e About an hour ago 643MB
2026-04-17 01:15:44.987367 | orchestrator | registry.osism.tech/osism/osism latest 6a17a650018b About an hour ago 410MB
2026-04-17 01:15:44.987373 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 18e420754593 About an hour ago 641MB
2026-04-17 01:15:44.987379 | orchestrator | registry.osism.tech/osism/ceph-ansible reef a8842cac940c About an hour ago 585MB
2026-04-17 01:15:44.987401 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest f50ee5c3eb17 About an hour ago 1.24GB
2026-04-17 01:15:44.987407 | orchestrator | registry.osism.tech/osism/osism-frontend latest bbd96dc576f3 About an hour ago 213MB
2026-04-17 01:15:44.987412 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 9da4db97d1a2 About an hour ago 362MB
2026-04-17 01:15:44.987417 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9e238fdcbaa6 21 hours ago 238MB
2026-04-17 01:15:44.987423 | orchestrator | registry.osism.tech/osism/cephclient reef 7e6c43c14f00 21 hours ago 453MB
2026-04-17 01:15:44.987428 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 6c9ef22543ec 23 hours ago 668MB
2026-04-17 01:15:44.987434 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 27a59bb31ea5 23 hours ago 579MB
2026-04-17 01:15:44.987439 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7931d792ed30 23 hours ago 265MB
2026-04-17 01:15:44.987444 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 3ea1c990b9ec 23 hours ago 404MB
2026-04-17 01:15:44.987450 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 bb8015c3a246 23 hours ago 306MB
2026-04-17 01:15:44.987471 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 db7d24062eb8 23 hours ago 839MB
2026-04-17 01:15:44.987477 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8fac8532c692 23 hours ago 357MB
2026-04-17 01:15:44.987482 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 e21000b938dd 23 hours ago 308MB
2026-04-17 01:15:44.987487 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-17 01:15:44.987493 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-17 01:15:44.987510 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 5 months ago 334MB
2026-04-17 01:15:44.987515 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-17 01:15:44.987527 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-17 01:15:44.987533 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-17 01:15:44.987538 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-17 01:15:45.139518 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-17 01:15:45.139649 | orchestrator | ++ semver latest 5.0.0
2026-04-17 01:15:45.200112 | orchestrator |
2026-04-17 01:15:45.200244 | orchestrator | ## Containers @ testbed-node-0
2026-04-17 01:15:45.200261 | orchestrator |
2026-04-17 01:15:45.200270 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-17 01:15:45.200279 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-17 01:15:45.200286 | orchestrator | + echo
2026-04-17 01:15:45.200295 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-17 01:15:45.200305 | orchestrator | + echo
2026-04-17 01:15:45.200315 | orchestrator | + osism container testbed-node-0 ps
2026-04-17 01:15:46.598070 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-17 01:15:46.598175 | orchestrator | 56d539614230 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-17 01:15:46.598210 | orchestrator | 1fce41e3e37c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-17 01:15:46.598223 | orchestrator | 9c8412e977d0 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-17 01:15:46.598234 | orchestrator | 94caef2a6094 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-17 01:15:46.598244 | orchestrator | 3a14b69d82bb registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-17 01:15:46.598254 | orchestrator | 6005be6a8219 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2026-04-17 01:15:46.598265 | orchestrator | c6e5425751f4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-17 01:15:46.598295 | orchestrator | 0fd1b8d00930 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-17 01:15:46.598306 | orchestrator | 713b115f36ce registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-17 01:15:46.598341 | orchestrator | f28b50cea10c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-17 01:15:46.598352 | orchestrator | ab50788a5e59 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-17 01:15:46.598361 | orchestrator | c5f33f74b4b4 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2026-04-17 01:15:46.598371 | orchestrator | 609714dc6a90 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2026-04-17 01:15:46.598433 | orchestrator | bca69b6c6abc registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-17 01:15:46.598446 | orchestrator | 30f8e61f80dd registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-17 01:15:46.598456 | orchestrator | 9278d0cc0f57 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-17 01:15:46.598467 | orchestrator | c23f7b73cc6e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-17 01:15:46.598477 | orchestrator | bcc8e6dab20d registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-17 01:15:46.598487 | orchestrator | 7c2eb845efd2 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-17 01:15:46.598497 | orchestrator | e47e59541903 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-04-17 01:15:46.598507 | orchestrator | cea1a0d14a11 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-17 01:15:46.598535 | orchestrator | f5b4aca4d03b registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-17 01:15:46.598546 | orchestrator | da6b6847823a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2026-04-17 01:15:46.598557 | orchestrator | 619bdc871e0c registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-17 01:15:46.598567 | orchestrator | 758d0f591c9a registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume
2026-04-17 01:15:46.598583 | orchestrator | b9c81363b6e7 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-17 01:15:46.598595 | orchestrator | 749c740d1642 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-17 01:15:46.598605 | orchestrator | 0a98313cfe89 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-04-17 01:15:46.598624 | orchestrator | b7034068bf95 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-04-17 01:15:46.598645 | orchestrator | 224ebc92c2e7 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-17 01:15:46.598657 | orchestrator | 08ff7fe8c4ba registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-04-17 01:15:46.598667 | orchestrator | 1a0371f2d141 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-04-17 01:15:46.598678 | orchestrator | 932a2f396308 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-17 01:15:46.598688 | orchestrator | 87273e313231 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-04-17 01:15:46.598699 | orchestrator | a81eccca030a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-04-17 01:15:46.598710 | orchestrator | f567346169ff registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-04-17 01:15:46.598721 | orchestrator | 353f96cb1842 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-04-17 01:15:46.598732 | orchestrator | a6cb211d5670 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-04-17 01:15:46.598743 | orchestrator | 5b1fe7b3b519 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-04-17 01:15:46.598753 | orchestrator | 34e0573c8b3a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-04-17 01:15:46.598763 | orchestrator | 7d5b31f433e8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-04-17 01:15:46.598773 | orchestrator | 76c5ec6d8799 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-04-17 01:15:46.598783 | orchestrator | f8edb83a4d20 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-04-17 01:15:46.598793 | orchestrator | 63ee1b166e2c registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-04-17 01:15:46.598816 | orchestrator | 9aa9e97a1bbb registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22
minutes ago Up 22 minutes (healthy) haproxy 2026-04-17 01:15:46.598827 | orchestrator | 4a6b354e653e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_northd 2026-04-17 01:15:46.598838 | orchestrator | 0d7c41f690f9 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_sb_db 2026-04-17 01:15:46.598850 | orchestrator | b283b5f4bc2a registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_nb_db 2026-04-17 01:15:46.598867 | orchestrator | b9acee202acb registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_controller 2026-04-17 01:15:46.598877 | orchestrator | d8565b72742c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0 2026-04-17 01:15:46.598888 | orchestrator | efbda5901034 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-17 01:15:46.598899 | orchestrator | 5bd2fab98fa6 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2026-04-17 01:15:46.598909 | orchestrator | 61a3cb06097c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-17 01:15:46.598919 | orchestrator | e6fbf7f130b9 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-17 01:15:46.598948 | orchestrator | 8f0a9bf615d0 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-17 01:15:46.598959 | orchestrator | 7c7fcdc45433 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-17 01:15:46.598969 | 
orchestrator | 606980e6069e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-17 01:15:46.598980 | orchestrator | 2eba4d7b850c registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-17 01:15:46.598991 | orchestrator | 570a311210d3 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-17 01:15:46.732156 | orchestrator | 2026-04-17 01:15:46.732284 | orchestrator | ## Images @ testbed-node-0 2026-04-17 01:15:46.732304 | orchestrator | 2026-04-17 01:15:46.732317 | orchestrator | + echo 2026-04-17 01:15:46.732329 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-17 01:15:46.732340 | orchestrator | + echo 2026-04-17 01:15:46.732351 | orchestrator | + osism container testbed-node-0 images 2026-04-17 01:15:48.153487 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-17 01:15:48.153584 | orchestrator | registry.osism.tech/osism/ceph-daemon reef daca25f73b90 21 hours ago 1.35GB 2026-04-17 01:15:48.153593 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 88ca19927a21 23 hours ago 322MB 2026-04-17 01:15:48.153599 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 005e5d90bd1e 23 hours ago 274MB 2026-04-17 01:15:48.153604 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 85fd573c5b6e 23 hours ago 411MB 2026-04-17 01:15:48.153610 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a6c04853bd33 23 hours ago 276MB 2026-04-17 01:15:48.153615 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 69fc19195d51 23 hours ago 266MB 2026-04-17 01:15:48.153620 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 6c9ef22543ec 23 hours ago 668MB 2026-04-17 01:15:48.153625 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 27a59bb31ea5 23 hours ago 579MB 2026-04-17 01:15:48.153629 | orchestrator | registry.osism.tech/kolla/cron 2024.2 
7931d792ed30 23 hours ago 265MB 2026-04-17 01:15:48.153634 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 8b98abb6416a 23 hours ago 452MB 2026-04-17 01:15:48.153656 | orchestrator | registry.osism.tech/kolla/redis 2024.2 06d42b68a282 23 hours ago 273MB 2026-04-17 01:15:48.153661 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 94c5471c716e 23 hours ago 273MB 2026-04-17 01:15:48.153666 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 4869d3eb9072 23 hours ago 279MB 2026-04-17 01:15:48.153671 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d20b708e8170 23 hours ago 279MB 2026-04-17 01:15:48.153675 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 1d75bd0be0f9 23 hours ago 1.15GB 2026-04-17 01:15:48.153680 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 bb8015c3a246 23 hours ago 306MB 2026-04-17 01:15:48.153698 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 ebbeeecc611b 23 hours ago 292MB 2026-04-17 01:15:48.153703 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56a1e089dafa 23 hours ago 298MB 2026-04-17 01:15:48.153708 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8874d7da1d12 23 hours ago 301MB 2026-04-17 01:15:48.153713 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8fac8532c692 23 hours ago 357MB 2026-04-17 01:15:48.153717 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 87015b70d9b9 23 hours ago 840MB 2026-04-17 01:15:48.153722 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 69bc6f47d055 23 hours ago 840MB 2026-04-17 01:15:48.153726 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 74976dbd4b71 23 hours ago 840MB 2026-04-17 01:15:48.153731 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 5c9aa7d2df72 23 hours ago 840MB 2026-04-17 01:15:48.153735 | orchestrator | 
registry.osism.tech/kolla/placement-api 2024.2 274b8a752d3d 23 hours ago 975MB 2026-04-17 01:15:48.153740 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 13a09ac8fee1 23 hours ago 1.03GB 2026-04-17 01:15:48.153745 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 fb4396954da7 23 hours ago 1.03GB 2026-04-17 01:15:48.153749 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 991169a048ae 23 hours ago 1.05GB 2026-04-17 01:15:48.153754 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 ebcef39c8ab3 23 hours ago 1.03GB 2026-04-17 01:15:48.153758 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 25eb4d742e51 23 hours ago 1.05GB 2026-04-17 01:15:48.153763 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 fe1f14ccd0bb 23 hours ago 1.07GB 2026-04-17 01:15:48.153767 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 d6a114c55b7b 23 hours ago 1.04GB 2026-04-17 01:15:48.153772 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 bb314dbe2f64 23 hours ago 1.04GB 2026-04-17 01:15:48.153777 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 252da07acbfa 23 hours ago 975MB 2026-04-17 01:15:48.153781 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 e7e63be5af98 23 hours ago 976MB 2026-04-17 01:15:48.153786 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 5ef5624077e3 23 hours ago 1.1GB 2026-04-17 01:15:48.153807 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 78c6e3e80814 23 hours ago 1.13GB 2026-04-17 01:15:48.153812 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 7bff74236ac7 23 hours ago 1.24GB 2026-04-17 01:15:48.153817 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 dc5680543153 23 hours ago 990MB 2026-04-17 01:15:48.153821 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e426952affd1 23 hours ago 990MB 2026-04-17 
01:15:48.153830 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 06db0bb33324 23 hours ago 989MB 2026-04-17 01:15:48.153835 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 11fc5099bdf8 23 hours ago 1.05GB 2026-04-17 01:15:48.153840 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 558b44bdbeb2 23 hours ago 989MB 2026-04-17 01:15:48.153844 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 5375a0db8e42 23 hours ago 1.21GB 2026-04-17 01:15:48.153849 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 814e7e10d8e6 23 hours ago 1.21GB 2026-04-17 01:15:48.153857 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 26d2fa4a963a 23 hours ago 1.37GB 2026-04-17 01:15:48.153862 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 632d19e413a1 23 hours ago 1.21GB 2026-04-17 01:15:48.153867 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 db938c6f6505 23 hours ago 1.4GB 2026-04-17 01:15:48.153871 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 d97b4ddbb8e6 23 hours ago 973MB 2026-04-17 01:15:48.153876 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 31ff36df1082 23 hours ago 973MB 2026-04-17 01:15:48.153882 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 660f73ad17c2 23 hours ago 973MB 2026-04-17 01:15:48.153889 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 2db7db54f5d5 23 hours ago 973MB 2026-04-17 01:15:48.153896 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 85fc037c2f05 23 hours ago 983MB 2026-04-17 01:15:48.153903 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 a940a925c66c 23 hours ago 988MB 2026-04-17 01:15:48.153910 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d679e0a4274d 23 hours ago 988MB 2026-04-17 01:15:48.153917 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 9ece664d32b9 23 hours ago 984MB 2026-04-17 
01:15:48.153925 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 913ec9429b26 23 hours ago 1.16GB 2026-04-17 01:15:48.153933 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 6db5b026dade 47 hours ago 1.57GB 2026-04-17 01:15:48.153941 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 03203e843c01 47 hours ago 1.54GB 2026-04-17 01:15:48.153951 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4a0f5ba104b9 47 hours ago 1.34GB 2026-04-17 01:15:48.153956 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 04d78d3e6ec1 47 hours ago 1.41GB 2026-04-17 01:15:48.153960 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 2a51adf886bd 47 hours ago 1.72GB 2026-04-17 01:15:48.153965 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 508588b33317 47 hours ago 1.42GB 2026-04-17 01:15:48.153970 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f916b2889117 2 days ago 992MB 2026-04-17 01:15:48.153974 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 98fb75734cff 2 days ago 992MB 2026-04-17 01:15:48.282337 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-17 01:15:48.282426 | orchestrator | ++ semver latest 5.0.0 2026-04-17 01:15:48.337700 | orchestrator | 2026-04-17 01:15:48.337805 | orchestrator | ## Containers @ testbed-node-1 2026-04-17 01:15:48.337848 | orchestrator | 2026-04-17 01:15:48.337863 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-17 01:15:48.337876 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-17 01:15:48.337889 | orchestrator | + echo 2026-04-17 01:15:48.337902 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-17 01:15:48.337910 | orchestrator | + echo 2026-04-17 01:15:48.337918 | orchestrator | + osism container testbed-node-1 ps 2026-04-17 01:15:49.769675 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-17 01:15:49.769780 | orchestrator | 
cc6375b259f9 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-17 01:15:49.769794 | orchestrator | 6c2b71f459f6 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-17 01:15:49.769801 | orchestrator | b0a4e7d6a85a registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-17 01:15:49.769807 | orchestrator | 8f28bfb771a2 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-17 01:15:49.769813 | orchestrator | ecc673f6037d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-17 01:15:49.769840 | orchestrator | 760cdaefe9ba registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-17 01:15:49.769848 | orchestrator | b08b0ffbf06f registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2026-04-17 01:15:49.769854 | orchestrator | 4fcebbb7b8e2 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-17 01:15:49.769864 | orchestrator | 48d4932678df registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-17 01:15:49.769870 | orchestrator | c19bb19d0edf registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-17 01:15:49.769876 | orchestrator | b2288af07ad6 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-17 01:15:49.769883 | orchestrator | 57b588a24564 
registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-04-17 01:15:49.769889 | orchestrator | 5fd806f846b4 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2026-04-17 01:15:49.769896 | orchestrator | e71b0e192b46 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-17 01:15:49.769902 | orchestrator | 3f81541f5d45 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-17 01:15:49.769908 | orchestrator | 3f00e75b50ee registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-17 01:15:49.769914 | orchestrator | b1960260a713 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-17 01:15:49.769920 | orchestrator | bc57fddbd0e2 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-17 01:15:49.769926 | orchestrator | ce41d774302c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-17 01:15:49.769952 | orchestrator | c64017a1dfc5 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-17 01:15:49.769959 | orchestrator | 04508673c85e registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-17 01:15:49.770083 | orchestrator | fb06552890ab registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-17 
01:15:49.770093 | orchestrator | 994b359e95e1 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-04-17 01:15:49.770099 | orchestrator | eb0a6f199445 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-17 01:15:49.770105 | orchestrator | 718a517aa325 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-17 01:15:49.770111 | orchestrator | c8b2b05030ba registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-17 01:15:49.770116 | orchestrator | 8f5396acc019 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-17 01:15:49.770130 | orchestrator | 91d20c66550b registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-17 01:15:49.770135 | orchestrator | 1ca3a980a863 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-04-17 01:15:49.770142 | orchestrator | b310355982be registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-17 01:15:49.770147 | orchestrator | c4b549649359 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-17 01:15:49.770153 | orchestrator | 346a929b117c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-17 01:15:49.770160 | orchestrator | 42529bcfdc20 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes 
ago Up 14 minutes prometheus_node_exporter 2026-04-17 01:15:49.770166 | orchestrator | 2bada31055b1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2026-04-17 01:15:49.770171 | orchestrator | 59d87adba644 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-04-17 01:15:49.770177 | orchestrator | 5e8568b6d311 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-17 01:15:49.770234 | orchestrator | a44d90ed9d0f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-17 01:15:49.770241 | orchestrator | 7f0a37b0da8d registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-17 01:15:49.770246 | orchestrator | ca5ae91bb14b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-17 01:15:49.770261 | orchestrator | 26aab1bc7ffe registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-17 01:15:49.770267 | orchestrator | d9ea191107b0 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-17 01:15:49.770272 | orchestrator | 4831a49b184f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-04-17 01:15:49.770278 | orchestrator | a01ffe4581c4 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-17 01:15:49.770284 | orchestrator | 342159fc8395 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-17 
01:15:49.770300 | orchestrator | 54dcbfa4160d registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-17 01:15:49.770308 | orchestrator | d1276d3c8a51 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_northd 2026-04-17 01:15:49.770314 | orchestrator | 033a033d5a98 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_sb_db 2026-04-17 01:15:49.770320 | orchestrator | 43992f1364b5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes ovn_nb_db 2026-04-17 01:15:49.770326 | orchestrator | 0a734bc522a3 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_controller 2026-04-17 01:15:49.770332 | orchestrator | 1202ae0c4591 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-04-17 01:15:49.770337 | orchestrator | 8e8b4d1c5b80 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-17 01:15:49.770343 | orchestrator | 507f6615589f registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-04-17 01:15:49.770355 | orchestrator | 775548e4fda7 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-17 01:15:49.770362 | orchestrator | 3562b460eb77 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-17 01:15:49.770368 | orchestrator | bd2b3d8896a1 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-17 01:15:49.770373 | orchestrator | 51e7641854b8 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-17 01:15:49.770377 | orchestrator | 5c58d04437ae registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-17 01:15:49.770381 | orchestrator | 85834973021b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-17 01:15:49.770390 | orchestrator | fbcd5c5e0a48 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-17 01:15:49.907065 | orchestrator | 2026-04-17 01:15:49.907139 | orchestrator | ## Images @ testbed-node-1 2026-04-17 01:15:49.907147 | orchestrator | 2026-04-17 01:15:49.907152 | orchestrator | + echo 2026-04-17 01:15:49.907157 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-17 01:15:49.907162 | orchestrator | + echo 2026-04-17 01:15:49.907166 | orchestrator | + osism container testbed-node-1 images 2026-04-17 01:15:51.387299 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-17 01:15:51.387421 | orchestrator | registry.osism.tech/osism/ceph-daemon reef daca25f73b90 21 hours ago 1.35GB 2026-04-17 01:15:51.387434 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 88ca19927a21 23 hours ago 322MB 2026-04-17 01:15:51.387441 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 005e5d90bd1e 23 hours ago 274MB 2026-04-17 01:15:51.387448 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 85fd573c5b6e 23 hours ago 411MB 2026-04-17 01:15:51.387455 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a6c04853bd33 23 hours ago 276MB 2026-04-17 01:15:51.387461 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 69fc19195d51 23 hours ago 266MB 2026-04-17 01:15:51.387467 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 6c9ef22543ec 23 hours ago 668MB 2026-04-17 01:15:51.387475 | orchestrator | 
registry.osism.tech/kolla/fluentd 2024.2 27a59bb31ea5 23 hours ago 579MB 2026-04-17 01:15:51.387483 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7931d792ed30 23 hours ago 265MB 2026-04-17 01:15:51.387491 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 8b98abb6416a 23 hours ago 452MB 2026-04-17 01:15:51.387498 | orchestrator | registry.osism.tech/kolla/redis 2024.2 06d42b68a282 23 hours ago 273MB 2026-04-17 01:15:51.387503 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 94c5471c716e 23 hours ago 273MB 2026-04-17 01:15:51.387510 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 4869d3eb9072 23 hours ago 279MB 2026-04-17 01:15:51.387515 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d20b708e8170 23 hours ago 279MB 2026-04-17 01:15:51.387522 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 1d75bd0be0f9 23 hours ago 1.15GB 2026-04-17 01:15:51.387528 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 bb8015c3a246 23 hours ago 306MB 2026-04-17 01:15:51.387536 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56a1e089dafa 23 hours ago 298MB 2026-04-17 01:15:51.387541 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 ebbeeecc611b 23 hours ago 292MB 2026-04-17 01:15:51.387548 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8874d7da1d12 23 hours ago 301MB 2026-04-17 01:15:51.387554 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8fac8532c692 23 hours ago 357MB 2026-04-17 01:15:51.387560 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 87015b70d9b9 23 hours ago 840MB 2026-04-17 01:15:51.387566 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 69bc6f47d055 23 hours ago 840MB 2026-04-17 01:15:51.387572 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 74976dbd4b71 23 hours ago 840MB 2026-04-17 
01:15:51.387579 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 5c9aa7d2df72 23 hours ago 840MB 2026-04-17 01:15:51.387585 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 274b8a752d3d 23 hours ago 975MB 2026-04-17 01:15:51.387611 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 13a09ac8fee1 23 hours ago 1.03GB 2026-04-17 01:15:51.387616 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 fb4396954da7 23 hours ago 1.03GB 2026-04-17 01:15:51.387620 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 991169a048ae 23 hours ago 1.05GB 2026-04-17 01:15:51.387624 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 ebcef39c8ab3 23 hours ago 1.03GB 2026-04-17 01:15:51.387628 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 25eb4d742e51 23 hours ago 1.05GB 2026-04-17 01:15:51.387632 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 fe1f14ccd0bb 23 hours ago 1.07GB 2026-04-17 01:15:51.387636 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 d6a114c55b7b 23 hours ago 1.04GB 2026-04-17 01:15:51.387640 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 bb314dbe2f64 23 hours ago 1.04GB 2026-04-17 01:15:51.387657 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 5ef5624077e3 23 hours ago 1.1GB 2026-04-17 01:15:51.387662 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 78c6e3e80814 23 hours ago 1.13GB 2026-04-17 01:15:51.387666 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 7bff74236ac7 23 hours ago 1.24GB 2026-04-17 01:15:51.387683 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 dc5680543153 23 hours ago 990MB 2026-04-17 01:15:51.387687 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e426952affd1 23 hours ago 990MB 2026-04-17 01:15:51.387691 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 06db0bb33324 23 hours ago 989MB 
2026-04-17 01:15:51.387694 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 5375a0db8e42 23 hours ago 1.21GB 2026-04-17 01:15:51.387698 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 814e7e10d8e6 23 hours ago 1.21GB 2026-04-17 01:15:51.387702 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 26d2fa4a963a 23 hours ago 1.37GB 2026-04-17 01:15:51.387705 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 632d19e413a1 23 hours ago 1.21GB 2026-04-17 01:15:51.387709 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 db938c6f6505 23 hours ago 1.4GB 2026-04-17 01:15:51.387713 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 85fc037c2f05 23 hours ago 983MB 2026-04-17 01:15:51.387716 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 a940a925c66c 23 hours ago 988MB 2026-04-17 01:15:51.387720 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d679e0a4274d 23 hours ago 988MB 2026-04-17 01:15:51.387724 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 9ece664d32b9 23 hours ago 984MB 2026-04-17 01:15:51.387727 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 913ec9429b26 23 hours ago 1.16GB 2026-04-17 01:15:51.387731 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 6db5b026dade 47 hours ago 1.57GB 2026-04-17 01:15:51.387735 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 03203e843c01 47 hours ago 1.54GB 2026-04-17 01:15:51.387739 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4a0f5ba104b9 47 hours ago 1.34GB 2026-04-17 01:15:51.387742 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 04d78d3e6ec1 47 hours ago 1.41GB 2026-04-17 01:15:51.387746 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 2a51adf886bd 47 hours ago 1.72GB 2026-04-17 01:15:51.387749 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 508588b33317 47 hours ago 1.42GB 
2026-04-17 01:15:51.387757 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f916b2889117 2 days ago 992MB 2026-04-17 01:15:51.387761 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 98fb75734cff 2 days ago 992MB 2026-04-17 01:15:51.520421 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-17 01:15:51.520978 | orchestrator | ++ semver latest 5.0.0 2026-04-17 01:15:51.564864 | orchestrator | 2026-04-17 01:15:51.564933 | orchestrator | ## Containers @ testbed-node-2 2026-04-17 01:15:51.564940 | orchestrator | 2026-04-17 01:15:51.564945 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-17 01:15:51.564949 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-17 01:15:51.564954 | orchestrator | + echo 2026-04-17 01:15:51.564959 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-17 01:15:51.564964 | orchestrator | + echo 2026-04-17 01:15:51.564968 | orchestrator | + osism container testbed-node-2 ps 2026-04-17 01:15:53.043767 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-17 01:15:53.043858 | orchestrator | 9c4f0bee0f31 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-17 01:15:53.043870 | orchestrator | 402c78e401a1 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-17 01:15:53.043877 | orchestrator | 6c8185e441d9 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-17 01:15:53.043884 | orchestrator | 9f5dbbe020c0 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-17 01:15:53.043890 | orchestrator | 9736ca060b18 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes 
(healthy) octavia_api 2026-04-17 01:15:53.043897 | orchestrator | 35404fadc254 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-17 01:15:53.043903 | orchestrator | 269f8354b5f9 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_conductor 2026-04-17 01:15:53.043909 | orchestrator | 7f6ca67b5c77 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-17 01:15:53.043916 | orchestrator | b39b2146ae5a registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-17 01:15:53.043922 | orchestrator | e94c1cbff5be registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-17 01:15:53.043929 | orchestrator | 6436a061d268 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-17 01:15:53.043935 | orchestrator | 98168cb7503e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-04-17 01:15:53.043941 | orchestrator | 860150a90cc6 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_mdns 2026-04-17 01:15:53.043947 | orchestrator | 23f790c72d38 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-17 01:15:53.043962 | orchestrator | 6da53b227279 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-17 01:15:53.043990 | orchestrator | e28fc7b77f0e registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 
2026-04-17 01:15:53.043997 | orchestrator | 44673d52d6b6 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-17 01:15:53.044003 | orchestrator | cc7098e11aaa registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-17 01:15:53.044009 | orchestrator | e5211d22c028 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-17 01:15:53.044015 | orchestrator | 92f09869f463 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-17 01:15:53.044021 | orchestrator | 05cfa55d1935 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-17 01:15:53.044042 | orchestrator | 17800b482b87 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-17 01:15:53.044048 | orchestrator | 6881ebed5376 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-04-17 01:15:53.044054 | orchestrator | 98a7fd745509 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-17 01:15:53.044061 | orchestrator | 6a63a06fa911 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-17 01:15:53.044067 | orchestrator | d489f4969678 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-17 01:15:53.044074 | orchestrator | 6252848e576a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes 
(healthy) cinder_api 2026-04-17 01:15:53.044081 | orchestrator | 02b8b8feeb61 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-17 01:15:53.044087 | orchestrator | 613b713c9151 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-04-17 01:15:53.044094 | orchestrator | 79cffed03ca4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-17 01:15:53.044101 | orchestrator | ceab1713fab7 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-17 01:15:53.044107 | orchestrator | b401ed9928ee registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-17 01:15:53.044113 | orchestrator | 6a57c65d0e8c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-17 01:15:53.044119 | orchestrator | c77cb9f47179 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-04-17 01:15:53.044131 | orchestrator | 37b55af55d93 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-04-17 01:15:53.044138 | orchestrator | b79c682f3a98 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-17 01:15:53.044144 | orchestrator | 5471bdd80a98 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-17 01:15:53.044150 | orchestrator | ebc47d128e9b registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init 
--single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-17 01:15:53.044156 | orchestrator | 5c7dad61318c registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-17 01:15:53.044163 | orchestrator | 42d41481ba9b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 19 minutes (healthy) mariadb 2026-04-17 01:15:53.044169 | orchestrator | 3497e1b19806 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-17 01:15:53.044175 | orchestrator | 8f20c6faf989 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-04-17 01:15:53.044265 | orchestrator | 40e90ce023e3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-17 01:15:53.044272 | orchestrator | d7f5870d2a59 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-17 01:15:53.044295 | orchestrator | 9b34f7547e3e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-17 01:15:53.044302 | orchestrator | cb93d2cf9532 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_northd 2026-04-17 01:15:53.044309 | orchestrator | b9ae0280c541 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes ovn_sb_db 2026-04-17 01:15:53.044316 | orchestrator | 91e7eb609680 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes ovn_nb_db 2026-04-17 01:15:53.044323 | orchestrator | 15546cb1b61d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_controller 2026-04-17 01:15:53.044335 | 
orchestrator | c3716ca78541 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-17 01:15:53.044342 | orchestrator | ce2e9b0b68a5 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) openvswitch_vswitchd 2026-04-17 01:15:53.044348 | orchestrator | 42ccd5fa2fbc registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-04-17 01:15:53.044362 | orchestrator | 6ed0a0c00b8c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-17 01:15:53.044376 | orchestrator | e7bae393e80f registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-17 01:15:53.044388 | orchestrator | b57108d08b67 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-17 01:15:53.044395 | orchestrator | 1771ae18fbde registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) memcached 2026-04-17 01:15:53.044401 | orchestrator | e0dac94446c0 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-17 01:15:53.044407 | orchestrator | c82fc621c6f6 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-17 01:15:53.044413 | orchestrator | 44df23804784 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-17 01:15:53.188480 | orchestrator | 2026-04-17 01:15:53.188570 | orchestrator | ## Images @ testbed-node-2 2026-04-17 01:15:53.188581 | orchestrator | 2026-04-17 01:15:53.188589 | orchestrator | + echo 2026-04-17 01:15:53.188596 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-17 
01:15:53.188604 | orchestrator | + echo 2026-04-17 01:15:53.188610 | orchestrator | + osism container testbed-node-2 images 2026-04-17 01:15:54.628000 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-17 01:15:54.628064 | orchestrator | registry.osism.tech/osism/ceph-daemon reef daca25f73b90 21 hours ago 1.35GB 2026-04-17 01:15:54.628074 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 88ca19927a21 23 hours ago 322MB 2026-04-17 01:15:54.628091 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 005e5d90bd1e 23 hours ago 274MB 2026-04-17 01:15:54.628098 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 85fd573c5b6e 23 hours ago 411MB 2026-04-17 01:15:54.628108 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a6c04853bd33 23 hours ago 276MB 2026-04-17 01:15:54.628117 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 69fc19195d51 23 hours ago 266MB 2026-04-17 01:15:54.628126 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 6c9ef22543ec 23 hours ago 668MB 2026-04-17 01:15:54.628135 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 27a59bb31ea5 23 hours ago 579MB 2026-04-17 01:15:54.628273 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7931d792ed30 23 hours ago 265MB 2026-04-17 01:15:54.628288 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 8b98abb6416a 23 hours ago 452MB 2026-04-17 01:15:54.628297 | orchestrator | registry.osism.tech/kolla/redis 2024.2 06d42b68a282 23 hours ago 273MB 2026-04-17 01:15:54.628306 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 94c5471c716e 23 hours ago 273MB 2026-04-17 01:15:54.628314 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 4869d3eb9072 23 hours ago 279MB 2026-04-17 01:15:54.628323 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d20b708e8170 23 hours ago 279MB 2026-04-17 01:15:54.628331 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 1d75bd0be0f9 23 
hours ago 1.15GB 2026-04-17 01:15:54.628340 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 bb8015c3a246 23 hours ago 306MB 2026-04-17 01:15:54.628348 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 ebbeeecc611b 23 hours ago 292MB 2026-04-17 01:15:54.628357 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56a1e089dafa 23 hours ago 298MB 2026-04-17 01:15:54.628365 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8874d7da1d12 23 hours ago 301MB 2026-04-17 01:15:54.628389 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 8fac8532c692 23 hours ago 357MB 2026-04-17 01:15:54.628398 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 87015b70d9b9 23 hours ago 840MB 2026-04-17 01:15:54.628406 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 69bc6f47d055 23 hours ago 840MB 2026-04-17 01:15:54.628414 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 74976dbd4b71 23 hours ago 840MB 2026-04-17 01:15:54.628422 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 5c9aa7d2df72 23 hours ago 840MB 2026-04-17 01:15:54.628431 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 274b8a752d3d 23 hours ago 975MB 2026-04-17 01:15:54.628439 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 13a09ac8fee1 23 hours ago 1.03GB 2026-04-17 01:15:54.628447 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 fb4396954da7 23 hours ago 1.03GB 2026-04-17 01:15:54.628453 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 991169a048ae 23 hours ago 1.05GB 2026-04-17 01:15:54.628459 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 ebcef39c8ab3 23 hours ago 1.03GB 2026-04-17 01:15:54.628465 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 25eb4d742e51 23 hours ago 1.05GB 2026-04-17 01:15:54.628471 | 
orchestrator | registry.osism.tech/kolla/keystone 2024.2 fe1f14ccd0bb 23 hours ago 1.07GB 2026-04-17 01:15:54.628477 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 d6a114c55b7b 23 hours ago 1.04GB 2026-04-17 01:15:54.628483 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 bb314dbe2f64 23 hours ago 1.04GB 2026-04-17 01:15:54.628494 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 5ef5624077e3 23 hours ago 1.1GB 2026-04-17 01:15:54.628504 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 78c6e3e80814 23 hours ago 1.13GB 2026-04-17 01:15:54.628514 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 7bff74236ac7 23 hours ago 1.24GB 2026-04-17 01:15:54.628524 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 dc5680543153 23 hours ago 990MB 2026-04-17 01:15:54.628534 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e426952affd1 23 hours ago 990MB 2026-04-17 01:15:54.628545 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 06db0bb33324 23 hours ago 989MB 2026-04-17 01:15:54.628555 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 5375a0db8e42 23 hours ago 1.21GB 2026-04-17 01:15:54.628566 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 814e7e10d8e6 23 hours ago 1.21GB 2026-04-17 01:15:54.628576 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 26d2fa4a963a 23 hours ago 1.37GB 2026-04-17 01:15:54.628588 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 632d19e413a1 23 hours ago 1.21GB 2026-04-17 01:15:54.628597 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 db938c6f6505 23 hours ago 1.4GB 2026-04-17 01:15:54.628607 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 85fc037c2f05 23 hours ago 983MB 2026-04-17 01:15:54.628618 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 a940a925c66c 23 hours ago 988MB 2026-04-17 01:15:54.628640 | 
orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d679e0a4274d 23 hours ago 988MB 2026-04-17 01:15:54.628650 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 9ece664d32b9 23 hours ago 984MB 2026-04-17 01:15:54.628670 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 913ec9429b26 23 hours ago 1.16GB 2026-04-17 01:15:54.628690 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 6db5b026dade 47 hours ago 1.57GB 2026-04-17 01:15:54.628701 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 03203e843c01 47 hours ago 1.54GB 2026-04-17 01:15:54.628711 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4a0f5ba104b9 47 hours ago 1.34GB 2026-04-17 01:15:54.628721 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 04d78d3e6ec1 47 hours ago 1.41GB 2026-04-17 01:15:54.628733 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 2a51adf886bd 47 hours ago 1.72GB 2026-04-17 01:15:54.628744 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 508588b33317 47 hours ago 1.42GB 2026-04-17 01:15:54.628754 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f916b2889117 2 days ago 992MB 2026-04-17 01:15:54.628765 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 98fb75734cff 2 days ago 992MB 2026-04-17 01:15:54.760826 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-17 01:15:54.766845 | orchestrator | + set -e 2026-04-17 01:15:54.766911 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 01:15:54.768504 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 01:15:54.768550 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 01:15:54.768559 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 01:15:54.768566 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 01:15:54.768574 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 01:15:54.768581 | orchestrator | ++ CONFIGURATION_VERSION=main 
2026-04-17 01:15:54.768588 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-17 01:15:54.768594 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-17 01:15:54.768601 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 01:15:54.768607 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 01:15:54.768614 | orchestrator | ++ export ARA=false 2026-04-17 01:15:54.768621 | orchestrator | ++ ARA=false 2026-04-17 01:15:54.768627 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 01:15:54.768633 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 01:15:54.768639 | orchestrator | ++ export TEMPEST=true 2026-04-17 01:15:54.768646 | orchestrator | ++ TEMPEST=true 2026-04-17 01:15:54.768652 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 01:15:54.768659 | orchestrator | ++ IS_ZUUL=true 2026-04-17 01:15:54.768665 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 01:15:54.768672 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 01:15:54.768679 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 01:15:54.768685 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 01:15:54.768692 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 01:15:54.768698 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 01:15:54.768705 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 01:15:54.768711 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 01:15:54.768718 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 01:15:54.768724 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 01:15:54.768731 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-17 01:15:54.768737 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-17 01:15:54.776145 | orchestrator | + set -e 2026-04-17 01:15:54.776803 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-17 01:15:54.776825 | orchestrator | ++ export INTERACTIVE=false 2026-04-17 
01:15:54.776831 | orchestrator | ++ INTERACTIVE=false 2026-04-17 01:15:54.776836 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-17 01:15:54.776840 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-17 01:15:54.776845 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-17 01:15:54.777532 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-17 01:15:54.783757 | orchestrator | 2026-04-17 01:15:54.783815 | orchestrator | # Ceph status 2026-04-17 01:15:54.783823 | orchestrator | 2026-04-17 01:15:54.783830 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-17 01:15:54.783837 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-17 01:15:54.783845 | orchestrator | + echo 2026-04-17 01:15:54.783851 | orchestrator | + echo '# Ceph status' 2026-04-17 01:15:54.783858 | orchestrator | + echo 2026-04-17 01:15:54.783864 | orchestrator | + ceph -s 2026-04-17 01:15:55.355019 | orchestrator | cluster: 2026-04-17 01:15:55.355088 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-17 01:15:55.355094 | orchestrator | health: HEALTH_OK 2026-04-17 01:15:55.355099 | orchestrator | 2026-04-17 01:15:55.355104 | orchestrator | services: 2026-04-17 01:15:55.355108 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m) 2026-04-17 01:15:55.355112 | orchestrator | mgr: testbed-node-1(active, since 15m), standbys: testbed-node-0, testbed-node-2 2026-04-17 01:15:55.355117 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-17 01:15:55.355121 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2026-04-17 01:15:55.355125 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-17 01:15:55.355129 | orchestrator | 2026-04-17 01:15:55.355133 | orchestrator | data: 2026-04-17 01:15:55.355137 | orchestrator | volumes: 1/1 healthy 2026-04-17 01:15:55.355141 | orchestrator | pools: 14 pools, 401 pgs 2026-04-17 
01:15:55.355145 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-17 01:15:55.355149 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-17 01:15:55.355153 | orchestrator | pgs: 401 active+clean 2026-04-17 01:15:55.355157 | orchestrator | 2026-04-17 01:15:55.394904 | orchestrator | 2026-04-17 01:15:55.394953 | orchestrator | # Ceph versions 2026-04-17 01:15:55.394958 | orchestrator | 2026-04-17 01:15:55.394963 | orchestrator | + echo 2026-04-17 01:15:55.394967 | orchestrator | + echo '# Ceph versions' 2026-04-17 01:15:55.394972 | orchestrator | + echo 2026-04-17 01:15:55.394976 | orchestrator | + ceph versions 2026-04-17 01:15:56.011788 | orchestrator | { 2026-04-17 01:15:56.011853 | orchestrator | "mon": { 2026-04-17 01:15:56.011863 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-17 01:15:56.011871 | orchestrator | }, 2026-04-17 01:15:56.011879 | orchestrator | "mgr": { 2026-04-17 01:15:56.011897 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-17 01:15:56.011905 | orchestrator | }, 2026-04-17 01:15:56.011912 | orchestrator | "osd": { 2026-04-17 01:15:56.011922 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-17 01:15:56.011933 | orchestrator | }, 2026-04-17 01:15:56.011947 | orchestrator | "mds": { 2026-04-17 01:15:56.011965 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-17 01:15:56.011975 | orchestrator | }, 2026-04-17 01:15:56.011986 | orchestrator | "rgw": { 2026-04-17 01:15:56.011998 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-17 01:15:56.012010 | orchestrator | }, 2026-04-17 01:15:56.012020 | orchestrator | "overall": { 2026-04-17 01:15:56.012032 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 
18 2026-04-17 01:15:56.012044 | orchestrator | } 2026-04-17 01:15:56.012056 | orchestrator | } 2026-04-17 01:15:56.053746 | orchestrator | 2026-04-17 01:15:56.053815 | orchestrator | # Ceph OSD tree 2026-04-17 01:15:56.053823 | orchestrator | 2026-04-17 01:15:56.053829 | orchestrator | + echo 2026-04-17 01:15:56.053835 | orchestrator | + echo '# Ceph OSD tree' 2026-04-17 01:15:56.053842 | orchestrator | + echo 2026-04-17 01:15:56.053847 | orchestrator | + ceph osd df tree 2026-04-17 01:15:56.561258 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-17 01:15:56.561318 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-04-17 01:15:56.561324 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-04-17 01:15:56.561328 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.0 GiB 963 MiB 1 KiB 70 MiB 19 GiB 5.04 0.85 189 up osd.0 2026-04-17 01:15:56.561332 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.79 1.15 201 up osd.3 2026-04-17 01:15:56.561336 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-04-17 01:15:56.561340 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.20 1.05 190 up osd.1 2026-04-17 01:15:56.561344 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.63 0.95 202 up osd.4 2026-04-17 01:15:56.561361 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-04-17 01:15:56.561365 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.04 1.19 191 up osd.2 2026-04-17 01:15:56.561368 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 981 MiB 907 MiB 1 KiB 74 MiB 19 GiB 4.79 0.81 197 up osd.5 2026-04-17 01:15:56.561372 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 
GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-04-17 01:15:56.561376 | orchestrator | MIN/MAX VAR: 0.81/1.19 STDDEV: 0.84 2026-04-17 01:15:56.604837 | orchestrator | 2026-04-17 01:15:56.604884 | orchestrator | # Ceph monitor status 2026-04-17 01:15:56.604890 | orchestrator | 2026-04-17 01:15:56.604895 | orchestrator | + echo 2026-04-17 01:15:56.604899 | orchestrator | + echo '# Ceph monitor status' 2026-04-17 01:15:56.604903 | orchestrator | + echo 2026-04-17 01:15:56.604908 | orchestrator | + ceph mon stat 2026-04-17 01:15:57.223146 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-17 01:15:57.266314 | orchestrator | 2026-04-17 01:15:57.266371 | orchestrator | # Ceph quorum status 2026-04-17 01:15:57.266380 | orchestrator | 2026-04-17 01:15:57.266386 | orchestrator | + echo 2026-04-17 01:15:57.266391 | orchestrator | + echo '# Ceph quorum status' 2026-04-17 01:15:57.266395 | orchestrator | + echo 2026-04-17 01:15:57.266494 | orchestrator | + ceph quorum_status 2026-04-17 01:15:57.266916 | orchestrator | + jq 2026-04-17 01:15:57.856948 | orchestrator | { 2026-04-17 01:15:57.857003 | orchestrator | "election_epoch": 8, 2026-04-17 01:15:57.857010 | orchestrator | "quorum": [ 2026-04-17 01:15:57.857015 | orchestrator | 0, 2026-04-17 01:15:57.857019 | orchestrator | 1, 2026-04-17 01:15:57.857024 | orchestrator | 2 2026-04-17 01:15:57.857028 | orchestrator | ], 2026-04-17 01:15:57.857032 | orchestrator | "quorum_names": [ 2026-04-17 01:15:57.857036 | orchestrator | "testbed-node-0", 2026-04-17 01:15:57.857040 | orchestrator | "testbed-node-1", 2026-04-17 01:15:57.857045 | orchestrator | "testbed-node-2" 2026-04-17 01:15:57.857049 | orchestrator | ], 
2026-04-17 01:15:57.857053 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-17 01:15:57.857057 | orchestrator | "quorum_age": 1559, 2026-04-17 01:15:57.857061 | orchestrator | "features": { 2026-04-17 01:15:57.857065 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-17 01:15:57.857069 | orchestrator | "quorum_mon": [ 2026-04-17 01:15:57.857073 | orchestrator | "kraken", 2026-04-17 01:15:57.857077 | orchestrator | "luminous", 2026-04-17 01:15:57.857081 | orchestrator | "mimic", 2026-04-17 01:15:57.857085 | orchestrator | "osdmap-prune", 2026-04-17 01:15:57.857089 | orchestrator | "nautilus", 2026-04-17 01:15:57.857093 | orchestrator | "octopus", 2026-04-17 01:15:57.857097 | orchestrator | "pacific", 2026-04-17 01:15:57.857101 | orchestrator | "elector-pinging", 2026-04-17 01:15:57.857104 | orchestrator | "quincy", 2026-04-17 01:15:57.857108 | orchestrator | "reef" 2026-04-17 01:15:57.857112 | orchestrator | ] 2026-04-17 01:15:57.857116 | orchestrator | }, 2026-04-17 01:15:57.857120 | orchestrator | "monmap": { 2026-04-17 01:15:57.857124 | orchestrator | "epoch": 1, 2026-04-17 01:15:57.857128 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-17 01:15:57.857133 | orchestrator | "modified": "2026-04-17T00:49:39.897765Z", 2026-04-17 01:15:57.857137 | orchestrator | "created": "2026-04-17T00:49:39.897765Z", 2026-04-17 01:15:57.857140 | orchestrator | "min_mon_release": 18, 2026-04-17 01:15:57.857144 | orchestrator | "min_mon_release_name": "reef", 2026-04-17 01:15:57.857148 | orchestrator | "election_strategy": 1, 2026-04-17 01:15:57.857152 | orchestrator | "disallowed_leaders": "", 2026-04-17 01:15:57.857156 | orchestrator | "stretch_mode": false, 2026-04-17 01:15:57.857160 | orchestrator | "tiebreaker_mon": "", 2026-04-17 01:15:57.857164 | orchestrator | "removed_ranks": "", 2026-04-17 01:15:57.857168 | orchestrator | "features": { 2026-04-17 01:15:57.857172 | orchestrator | "persistent": [ 2026-04-17 01:15:57.857176 | 
orchestrator | "kraken", 2026-04-17 01:15:57.857217 | orchestrator | "luminous", 2026-04-17 01:15:57.857221 | orchestrator | "mimic", 2026-04-17 01:15:57.857225 | orchestrator | "osdmap-prune", 2026-04-17 01:15:57.857245 | orchestrator | "nautilus", 2026-04-17 01:15:57.857250 | orchestrator | "octopus", 2026-04-17 01:15:57.857254 | orchestrator | "pacific", 2026-04-17 01:15:57.857258 | orchestrator | "elector-pinging", 2026-04-17 01:15:57.857261 | orchestrator | "quincy", 2026-04-17 01:15:57.857265 | orchestrator | "reef" 2026-04-17 01:15:57.857269 | orchestrator | ], 2026-04-17 01:15:57.857273 | orchestrator | "optional": [] 2026-04-17 01:15:57.857278 | orchestrator | }, 2026-04-17 01:15:57.857285 | orchestrator | "mons": [ 2026-04-17 01:15:57.857292 | orchestrator | { 2026-04-17 01:15:57.857298 | orchestrator | "rank": 0, 2026-04-17 01:15:57.857304 | orchestrator | "name": "testbed-node-0", 2026-04-17 01:15:57.857310 | orchestrator | "public_addrs": { 2026-04-17 01:15:57.857316 | orchestrator | "addrvec": [ 2026-04-17 01:15:57.857322 | orchestrator | { 2026-04-17 01:15:57.857328 | orchestrator | "type": "v2", 2026-04-17 01:15:57.857334 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-17 01:15:57.857340 | orchestrator | "nonce": 0 2026-04-17 01:15:57.857347 | orchestrator | }, 2026-04-17 01:15:57.857354 | orchestrator | { 2026-04-17 01:15:57.857360 | orchestrator | "type": "v1", 2026-04-17 01:15:57.857367 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-17 01:15:57.857373 | orchestrator | "nonce": 0 2026-04-17 01:15:57.857380 | orchestrator | } 2026-04-17 01:15:57.857387 | orchestrator | ] 2026-04-17 01:15:57.857394 | orchestrator | }, 2026-04-17 01:15:57.857399 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-17 01:15:57.857403 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-17 01:15:57.857407 | orchestrator | "priority": 0, 2026-04-17 01:15:57.857412 | orchestrator | "weight": 0, 2026-04-17 01:15:57.857416 | orchestrator | 
"crush_location": "{}" 2026-04-17 01:15:57.857420 | orchestrator | }, 2026-04-17 01:15:57.857424 | orchestrator | { 2026-04-17 01:15:57.857427 | orchestrator | "rank": 1, 2026-04-17 01:15:57.857431 | orchestrator | "name": "testbed-node-1", 2026-04-17 01:15:57.857435 | orchestrator | "public_addrs": { 2026-04-17 01:15:57.857439 | orchestrator | "addrvec": [ 2026-04-17 01:15:57.857443 | orchestrator | { 2026-04-17 01:15:57.857447 | orchestrator | "type": "v2", 2026-04-17 01:15:57.857451 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-17 01:15:57.857455 | orchestrator | "nonce": 0 2026-04-17 01:15:57.857459 | orchestrator | }, 2026-04-17 01:15:57.857463 | orchestrator | { 2026-04-17 01:15:57.857468 | orchestrator | "type": "v1", 2026-04-17 01:15:57.857472 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-17 01:15:57.857475 | orchestrator | "nonce": 0 2026-04-17 01:15:57.857479 | orchestrator | } 2026-04-17 01:15:57.857483 | orchestrator | ] 2026-04-17 01:15:57.857487 | orchestrator | }, 2026-04-17 01:15:57.857506 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-17 01:15:57.857515 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-17 01:15:57.857519 | orchestrator | "priority": 0, 2026-04-17 01:15:57.857523 | orchestrator | "weight": 0, 2026-04-17 01:15:57.857527 | orchestrator | "crush_location": "{}" 2026-04-17 01:15:57.857531 | orchestrator | }, 2026-04-17 01:15:57.857636 | orchestrator | { 2026-04-17 01:15:57.857643 | orchestrator | "rank": 2, 2026-04-17 01:15:57.857648 | orchestrator | "name": "testbed-node-2", 2026-04-17 01:15:57.857653 | orchestrator | "public_addrs": { 2026-04-17 01:15:57.857657 | orchestrator | "addrvec": [ 2026-04-17 01:15:57.857662 | orchestrator | { 2026-04-17 01:15:57.857666 | orchestrator | "type": "v2", 2026-04-17 01:15:57.857671 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-17 01:15:57.857676 | orchestrator | "nonce": 0 2026-04-17 01:15:57.857681 | orchestrator | }, 2026-04-17 01:15:57.857687 | 
orchestrator | { 2026-04-17 01:15:57.857694 | orchestrator | "type": "v1", 2026-04-17 01:15:57.857700 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-17 01:15:57.857706 | orchestrator | "nonce": 0 2026-04-17 01:15:57.857713 | orchestrator | } 2026-04-17 01:15:57.857719 | orchestrator | ] 2026-04-17 01:15:57.857725 | orchestrator | }, 2026-04-17 01:15:57.857731 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-17 01:15:57.857738 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-17 01:15:57.857745 | orchestrator | "priority": 0, 2026-04-17 01:15:57.857751 | orchestrator | "weight": 0, 2026-04-17 01:15:57.857758 | orchestrator | "crush_location": "{}" 2026-04-17 01:15:57.857787 | orchestrator | } 2026-04-17 01:15:57.857791 | orchestrator | ] 2026-04-17 01:15:57.857795 | orchestrator | } 2026-04-17 01:15:57.857799 | orchestrator | } 2026-04-17 01:15:57.857810 | orchestrator | 2026-04-17 01:15:57.857814 | orchestrator | # Ceph free space status 2026-04-17 01:15:57.857818 | orchestrator | 2026-04-17 01:15:57.857822 | orchestrator | + echo 2026-04-17 01:15:57.857826 | orchestrator | + echo '# Ceph free space status' 2026-04-17 01:15:57.857830 | orchestrator | + echo 2026-04-17 01:15:57.857834 | orchestrator | + ceph df 2026-04-17 01:15:58.442913 | orchestrator | --- RAW STORAGE --- 2026-04-17 01:15:58.443017 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-17 01:15:58.443043 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-17 01:15:58.443061 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-17 01:15:58.443072 | orchestrator | 2026-04-17 01:15:58.443082 | orchestrator | --- POOLS --- 2026-04-17 01:15:58.443093 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-17 01:15:58.443104 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-17 01:15:58.443113 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-17 01:15:58.443123 | orchestrator | cephfs_metadata 3 16 
4.4 KiB 22 96 KiB 0 35 GiB 2026-04-17 01:15:58.443133 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-17 01:15:58.443143 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-17 01:15:58.443152 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-17 01:15:58.443162 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-17 01:15:58.443172 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-17 01:15:58.443268 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-17 01:15:58.443287 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 01:15:58.443302 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 01:15:58.443318 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2026-04-17 01:15:58.443336 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 01:15:58.443355 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-17 01:15:58.497132 | orchestrator | ++ semver latest 5.0.0 2026-04-17 01:15:58.548483 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-17 01:15:58.548540 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-17 01:15:58.548548 | orchestrator | + osism apply facts 2026-04-17 01:16:10.041942 | orchestrator | 2026-04-17 01:16:10 | INFO  | Prepare task for execution of facts. 2026-04-17 01:16:10.122616 | orchestrator | 2026-04-17 01:16:10 | INFO  | Task 612cbc55-6361-4978-8ec1-94b925c67d4e (facts) was prepared for execution. 2026-04-17 01:16:10.122706 | orchestrator | 2026-04-17 01:16:10 | INFO  | It takes a moment until task 612cbc55-6361-4978-8ec1-94b925c67d4e (facts) has been started and output is visible here. 
2026-04-17 01:16:22.361987 | orchestrator | 2026-04-17 01:16:22.362163 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-17 01:16:22.362278 | orchestrator | 2026-04-17 01:16:22.362297 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-17 01:16:22.362313 | orchestrator | Friday 17 April 2026 01:16:13 +0000 (0:00:00.342) 0:00:00.342 ********** 2026-04-17 01:16:22.362330 | orchestrator | ok: [testbed-manager] 2026-04-17 01:16:22.362349 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:22.362366 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:22.362383 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:22.362401 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:16:22.362419 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:16:22.362437 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:16:22.362456 | orchestrator | 2026-04-17 01:16:22.362474 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-17 01:16:22.362525 | orchestrator | Friday 17 April 2026 01:16:14 +0000 (0:00:01.338) 0:00:01.681 ********** 2026-04-17 01:16:22.362547 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:16:22.362583 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:22.362603 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:16:22.362622 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:16:22.362639 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:16:22.362658 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:16:22.362675 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:16:22.362694 | orchestrator | 2026-04-17 01:16:22.362714 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-17 01:16:22.362732 | orchestrator | 2026-04-17 01:16:22.362750 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-17 01:16:22.362770 | orchestrator | Friday 17 April 2026 01:16:16 +0000 (0:00:01.227) 0:00:02.908 ********** 2026-04-17 01:16:22.362789 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:22.362838 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:22.362850 | orchestrator | ok: [testbed-manager] 2026-04-17 01:16:22.362861 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:22.362871 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:16:22.362890 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:16:22.362907 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:16:22.362927 | orchestrator | 2026-04-17 01:16:22.362946 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-17 01:16:22.362965 | orchestrator | 2026-04-17 01:16:22.362982 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-17 01:16:22.362993 | orchestrator | Friday 17 April 2026 01:16:21 +0000 (0:00:05.364) 0:00:08.272 ********** 2026-04-17 01:16:22.363004 | orchestrator | skipping: [testbed-manager] 2026-04-17 01:16:22.363015 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:22.363026 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:16:22.363036 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:16:22.363047 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:16:22.363057 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:16:22.363068 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:16:22.363080 | orchestrator | 2026-04-17 01:16:22.363090 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:16:22.363102 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:22.363114 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-17 01:16:22.363125 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:22.363136 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:22.363147 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:22.363158 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:22.363169 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:22.363229 | orchestrator | 2026-04-17 01:16:22.363241 | orchestrator | 2026-04-17 01:16:22.363252 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:16:22.363263 | orchestrator | Friday 17 April 2026 01:16:22 +0000 (0:00:00.660) 0:00:08.933 ********** 2026-04-17 01:16:22.363283 | orchestrator | =============================================================================== 2026-04-17 01:16:22.363301 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.36s 2026-04-17 01:16:22.363336 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s 2026-04-17 01:16:22.363355 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-04-17 01:16:22.363375 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.66s 2026-04-17 01:16:22.523607 | orchestrator | + osism validate ceph-mons 2026-04-17 01:16:53.187630 | orchestrator | 2026-04-17 01:16:53.187714 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-17 01:16:53.187722 | orchestrator | 2026-04-17 01:16:53.187726 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-17 01:16:53.187731 | orchestrator | Friday 17 April 2026 01:16:37 +0000 (0:00:00.517) 0:00:00.517 ********** 2026-04-17 01:16:53.187736 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:53.187740 | orchestrator | 2026-04-17 01:16:53.187744 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-17 01:16:53.187750 | orchestrator | Friday 17 April 2026 01:16:38 +0000 (0:00:00.981) 0:00:01.499 ********** 2026-04-17 01:16:53.187757 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:53.187763 | orchestrator | 2026-04-17 01:16:53.187773 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-17 01:16:53.187783 | orchestrator | Friday 17 April 2026 01:16:39 +0000 (0:00:00.681) 0:00:02.180 ********** 2026-04-17 01:16:53.187789 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.187796 | orchestrator | 2026-04-17 01:16:53.187803 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-17 01:16:53.187808 | orchestrator | Friday 17 April 2026 01:16:39 +0000 (0:00:00.147) 0:00:02.327 ********** 2026-04-17 01:16:53.187814 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.187820 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:53.187826 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:53.187832 | orchestrator | 2026-04-17 01:16:53.187838 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-17 01:16:53.187844 | orchestrator | Friday 17 April 2026 01:16:39 +0000 (0:00:00.290) 0:00:02.617 ********** 2026-04-17 01:16:53.187851 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:53.187857 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:53.187863 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.187870 | 
orchestrator | 2026-04-17 01:16:53.187877 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-17 01:16:53.187884 | orchestrator | Friday 17 April 2026 01:16:41 +0000 (0:00:01.565) 0:00:04.182 ********** 2026-04-17 01:16:53.187890 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.187896 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:16:53.187904 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:16:53.187911 | orchestrator | 2026-04-17 01:16:53.187918 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-17 01:16:53.187924 | orchestrator | Friday 17 April 2026 01:16:41 +0000 (0:00:00.294) 0:00:04.477 ********** 2026-04-17 01:16:53.187931 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.187938 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:53.187944 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:53.187950 | orchestrator | 2026-04-17 01:16:53.187957 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:16:53.187963 | orchestrator | Friday 17 April 2026 01:16:41 +0000 (0:00:00.321) 0:00:04.799 ********** 2026-04-17 01:16:53.187970 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.187976 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:53.187982 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:53.187989 | orchestrator | 2026-04-17 01:16:53.187995 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-17 01:16:53.188002 | orchestrator | Friday 17 April 2026 01:16:42 +0000 (0:00:00.339) 0:00:05.138 ********** 2026-04-17 01:16:53.188008 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188028 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:16:53.188035 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:16:53.188040 | orchestrator | 2026-04-17 
01:16:53.188047 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-17 01:16:53.188053 | orchestrator | Friday 17 April 2026 01:16:42 +0000 (0:00:00.436) 0:00:05.574 ********** 2026-04-17 01:16:53.188060 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188066 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:16:53.188072 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:16:53.188078 | orchestrator | 2026-04-17 01:16:53.188095 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 01:16:53.188102 | orchestrator | Friday 17 April 2026 01:16:42 +0000 (0:00:00.337) 0:00:05.912 ********** 2026-04-17 01:16:53.188110 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188116 | orchestrator | 2026-04-17 01:16:53.188122 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 01:16:53.188128 | orchestrator | Friday 17 April 2026 01:16:43 +0000 (0:00:00.286) 0:00:06.198 ********** 2026-04-17 01:16:53.188134 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188140 | orchestrator | 2026-04-17 01:16:53.188147 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 01:16:53.188153 | orchestrator | Friday 17 April 2026 01:16:43 +0000 (0:00:00.265) 0:00:06.464 ********** 2026-04-17 01:16:53.188160 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188191 | orchestrator | 2026-04-17 01:16:53.188197 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:16:53.188204 | orchestrator | Friday 17 April 2026 01:16:43 +0000 (0:00:00.237) 0:00:06.702 ********** 2026-04-17 01:16:53.188210 | orchestrator | 2026-04-17 01:16:53.188217 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:16:53.188223 | orchestrator | 
Friday 17 April 2026 01:16:43 +0000 (0:00:00.068) 0:00:06.770 ********** 2026-04-17 01:16:53.188230 | orchestrator | 2026-04-17 01:16:53.188237 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:16:53.188243 | orchestrator | Friday 17 April 2026 01:16:43 +0000 (0:00:00.068) 0:00:06.838 ********** 2026-04-17 01:16:53.188249 | orchestrator | 2026-04-17 01:16:53.188264 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 01:16:53.188271 | orchestrator | Friday 17 April 2026 01:16:44 +0000 (0:00:00.223) 0:00:07.062 ********** 2026-04-17 01:16:53.188278 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188284 | orchestrator | 2026-04-17 01:16:53.188291 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-17 01:16:53.188298 | orchestrator | Friday 17 April 2026 01:16:44 +0000 (0:00:00.262) 0:00:07.325 ********** 2026-04-17 01:16:53.188304 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188311 | orchestrator | 2026-04-17 01:16:53.188330 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-17 01:16:53.188336 | orchestrator | Friday 17 April 2026 01:16:44 +0000 (0:00:00.257) 0:00:07.583 ********** 2026-04-17 01:16:53.188342 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188348 | orchestrator | 2026-04-17 01:16:53.188355 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-17 01:16:53.188361 | orchestrator | Friday 17 April 2026 01:16:44 +0000 (0:00:00.107) 0:00:07.690 ********** 2026-04-17 01:16:53.188367 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:16:53.188374 | orchestrator | 2026-04-17 01:16:53.188380 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-17 01:16:53.188386 | orchestrator | Friday 
17 April 2026 01:16:46 +0000 (0:00:01.647) 0:00:09.338 ********** 2026-04-17 01:16:53.188393 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188399 | orchestrator | 2026-04-17 01:16:53.188405 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-17 01:16:53.188412 | orchestrator | Friday 17 April 2026 01:16:46 +0000 (0:00:00.288) 0:00:09.627 ********** 2026-04-17 01:16:53.188425 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188431 | orchestrator | 2026-04-17 01:16:53.188438 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-17 01:16:53.188443 | orchestrator | Friday 17 April 2026 01:16:46 +0000 (0:00:00.113) 0:00:09.740 ********** 2026-04-17 01:16:53.188450 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188456 | orchestrator | 2026-04-17 01:16:53.188462 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-17 01:16:53.188468 | orchestrator | Friday 17 April 2026 01:16:47 +0000 (0:00:00.315) 0:00:10.056 ********** 2026-04-17 01:16:53.188480 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188486 | orchestrator | 2026-04-17 01:16:53.188489 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-17 01:16:53.188493 | orchestrator | Friday 17 April 2026 01:16:47 +0000 (0:00:00.281) 0:00:10.337 ********** 2026-04-17 01:16:53.188497 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188501 | orchestrator | 2026-04-17 01:16:53.188505 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-17 01:16:53.188508 | orchestrator | Friday 17 April 2026 01:16:47 +0000 (0:00:00.101) 0:00:10.439 ********** 2026-04-17 01:16:53.188512 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188516 | orchestrator | 2026-04-17 01:16:53.188520 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-04-17 01:16:53.188523 | orchestrator | Friday 17 April 2026 01:16:47 +0000 (0:00:00.119) 0:00:10.559 ********** 2026-04-17 01:16:53.188527 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188531 | orchestrator | 2026-04-17 01:16:53.188535 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-17 01:16:53.188538 | orchestrator | Friday 17 April 2026 01:16:47 +0000 (0:00:00.264) 0:00:10.823 ********** 2026-04-17 01:16:53.188542 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:16:53.188546 | orchestrator | 2026-04-17 01:16:53.188550 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-17 01:16:53.188554 | orchestrator | Friday 17 April 2026 01:16:49 +0000 (0:00:01.450) 0:00:12.274 ********** 2026-04-17 01:16:53.188557 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188561 | orchestrator | 2026-04-17 01:16:53.188565 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-17 01:16:53.188569 | orchestrator | Friday 17 April 2026 01:16:49 +0000 (0:00:00.312) 0:00:12.586 ********** 2026-04-17 01:16:53.188572 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188576 | orchestrator | 2026-04-17 01:16:53.188580 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-17 01:16:53.188583 | orchestrator | Friday 17 April 2026 01:16:49 +0000 (0:00:00.119) 0:00:12.705 ********** 2026-04-17 01:16:53.188587 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:16:53.188591 | orchestrator | 2026-04-17 01:16:53.188595 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-17 01:16:53.188599 | orchestrator | Friday 17 April 2026 01:16:49 +0000 (0:00:00.145) 0:00:12.851 ********** 2026-04-17 01:16:53.188602 | orchestrator | 
skipping: [testbed-node-0] 2026-04-17 01:16:53.188606 | orchestrator | 2026-04-17 01:16:53.188610 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-17 01:16:53.188614 | orchestrator | Friday 17 April 2026 01:16:49 +0000 (0:00:00.134) 0:00:12.986 ********** 2026-04-17 01:16:53.188617 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188621 | orchestrator | 2026-04-17 01:16:53.188625 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 01:16:53.188629 | orchestrator | Friday 17 April 2026 01:16:50 +0000 (0:00:00.146) 0:00:13.132 ********** 2026-04-17 01:16:53.188632 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:53.188636 | orchestrator | 2026-04-17 01:16:53.188640 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 01:16:53.188644 | orchestrator | Friday 17 April 2026 01:16:50 +0000 (0:00:00.293) 0:00:13.426 ********** 2026-04-17 01:16:53.188652 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:16:53.188656 | orchestrator | 2026-04-17 01:16:53.188662 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 01:16:53.188665 | orchestrator | Friday 17 April 2026 01:16:50 +0000 (0:00:00.233) 0:00:13.659 ********** 2026-04-17 01:16:53.188669 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:53.188673 | orchestrator | 2026-04-17 01:16:53.188677 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 01:16:53.188681 | orchestrator | Friday 17 April 2026 01:16:52 +0000 (0:00:01.696) 0:00:15.355 ********** 2026-04-17 01:16:53.188684 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:53.188688 | orchestrator | 2026-04-17 01:16:53.188692 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-04-17 01:16:53.188695 | orchestrator | Friday 17 April 2026 01:16:52 +0000 (0:00:00.263) 0:00:15.619 ********** 2026-04-17 01:16:53.188699 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:53.188703 | orchestrator | 2026-04-17 01:16:53.188710 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:16:55.332006 | orchestrator | Friday 17 April 2026 01:16:53 +0000 (0:00:00.619) 0:00:16.238 ********** 2026-04-17 01:16:55.332105 | orchestrator | 2026-04-17 01:16:55.332123 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:16:55.332138 | orchestrator | Friday 17 April 2026 01:16:53 +0000 (0:00:00.069) 0:00:16.307 ********** 2026-04-17 01:16:55.332152 | orchestrator | 2026-04-17 01:16:55.332206 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:16:55.332223 | orchestrator | Friday 17 April 2026 01:16:53 +0000 (0:00:00.077) 0:00:16.385 ********** 2026-04-17 01:16:55.332246 | orchestrator | 2026-04-17 01:16:55.332260 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 01:16:55.332274 | orchestrator | Friday 17 April 2026 01:16:53 +0000 (0:00:00.072) 0:00:16.457 ********** 2026-04-17 01:16:55.332288 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:16:55.332302 | orchestrator | 2026-04-17 01:16:55.332316 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 01:16:55.332330 | orchestrator | Friday 17 April 2026 01:16:54 +0000 (0:00:01.238) 0:00:17.696 ********** 2026-04-17 01:16:55.332344 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-17 01:16:55.332358 | orchestrator |  "msg": [ 2026-04-17 
01:16:55.332374 | orchestrator |  "Validator run completed.", 2026-04-17 01:16:55.332388 | orchestrator |  "You can find the report file here:", 2026-04-17 01:16:55.332402 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-17T01:16:38+00:00-report.json", 2026-04-17 01:16:55.332417 | orchestrator |  "on the following host:", 2026-04-17 01:16:55.332430 | orchestrator |  "testbed-manager" 2026-04-17 01:16:55.332444 | orchestrator |  ] 2026-04-17 01:16:55.332458 | orchestrator | } 2026-04-17 01:16:55.332472 | orchestrator | 2026-04-17 01:16:55.332486 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:16:55.332501 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-17 01:16:55.332516 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:55.332530 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:16:55.332552 | orchestrator | 2026-04-17 01:16:55.332567 | orchestrator | 2026-04-17 01:16:55.332582 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:16:55.332598 | orchestrator | Friday 17 April 2026 01:16:55 +0000 (0:00:00.403) 0:00:18.099 ********** 2026-04-17 01:16:55.332636 | orchestrator | =============================================================================== 2026-04-17 01:16:55.332651 | orchestrator | Aggregate test results step one ----------------------------------------- 1.70s 2026-04-17 01:16:55.332666 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.65s 2026-04-17 01:16:55.332683 | orchestrator | Get container info ------------------------------------------------------ 1.57s 2026-04-17 01:16:55.332698 | orchestrator | Gather status data 
------------------------------------------------------ 1.45s 2026-04-17 01:16:55.332713 | orchestrator | Write report file ------------------------------------------------------- 1.24s 2026-04-17 01:16:55.332727 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2026-04-17 01:16:55.332743 | orchestrator | Create report output directory ------------------------------------------ 0.68s 2026-04-17 01:16:55.332758 | orchestrator | Aggregate test results step three --------------------------------------- 0.62s 2026-04-17 01:16:55.332773 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.44s 2026-04-17 01:16:55.332788 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-17 01:16:55.332802 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s 2026-04-17 01:16:55.332818 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-04-17 01:16:55.332832 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.34s 2026-04-17 01:16:55.332845 | orchestrator | Set test result to passed if container is existing ---------------------- 0.32s 2026-04-17 01:16:55.332859 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2026-04-17 01:16:55.332872 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2026-04-17 01:16:55.332887 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-04-17 01:16:55.332901 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s 2026-04-17 01:16:55.332915 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2026-04-17 01:16:55.332928 | orchestrator | Set quorum test data 
---------------------------------------------------- 0.29s 2026-04-17 01:16:55.502142 | orchestrator | + osism validate ceph-mgrs 2026-04-17 01:17:24.120729 | orchestrator | 2026-04-17 01:17:24.120827 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-17 01:17:24.120837 | orchestrator | 2026-04-17 01:17:24.120843 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-17 01:17:24.120850 | orchestrator | Friday 17 April 2026 01:17:10 +0000 (0:00:00.555) 0:00:00.555 ********** 2026-04-17 01:17:24.120858 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:24.120864 | orchestrator | 2026-04-17 01:17:24.120871 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-17 01:17:24.120878 | orchestrator | Friday 17 April 2026 01:17:11 +0000 (0:00:00.994) 0:00:01.549 ********** 2026-04-17 01:17:24.120885 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:24.120892 | orchestrator | 2026-04-17 01:17:24.120899 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-17 01:17:24.120906 | orchestrator | Friday 17 April 2026 01:17:12 +0000 (0:00:00.683) 0:00:02.233 ********** 2026-04-17 01:17:24.120913 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.120921 | orchestrator | 2026-04-17 01:17:24.120928 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-17 01:17:24.120951 | orchestrator | Friday 17 April 2026 01:17:12 +0000 (0:00:00.141) 0:00:02.375 ********** 2026-04-17 01:17:24.120958 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.120965 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:17:24.120971 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:17:24.120978 | orchestrator | 2026-04-17 01:17:24.120984 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-04-17 01:17:24.120991 | orchestrator | Friday 17 April 2026 01:17:12 +0000 (0:00:00.277) 0:00:02.652 ********** 2026-04-17 01:17:24.121014 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:17:24.121020 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121027 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:17:24.121033 | orchestrator | 2026-04-17 01:17:24.121039 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-17 01:17:24.121046 | orchestrator | Friday 17 April 2026 01:17:13 +0000 (0:00:01.358) 0:00:04.011 ********** 2026-04-17 01:17:24.121051 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121057 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:17:24.121064 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:17:24.121069 | orchestrator | 2026-04-17 01:17:24.121078 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-17 01:17:24.121085 | orchestrator | Friday 17 April 2026 01:17:14 +0000 (0:00:00.288) 0:00:04.299 ********** 2026-04-17 01:17:24.121111 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121117 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:17:24.121123 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:17:24.121130 | orchestrator | 2026-04-17 01:17:24.121136 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:17:24.121142 | orchestrator | Friday 17 April 2026 01:17:14 +0000 (0:00:00.319) 0:00:04.619 ********** 2026-04-17 01:17:24.121148 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121168 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:17:24.121174 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:17:24.121179 | orchestrator | 2026-04-17 01:17:24.121186 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-17 01:17:24.121192 | orchestrator | Friday 17 April 2026 01:17:14 +0000 (0:00:00.341) 0:00:04.961 ********** 2026-04-17 01:17:24.121199 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121204 | orchestrator | skipping: [testbed-node-1] 2026-04-17 01:17:24.121210 | orchestrator | skipping: [testbed-node-2] 2026-04-17 01:17:24.121217 | orchestrator | 2026-04-17 01:17:24.121223 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-17 01:17:24.121230 | orchestrator | Friday 17 April 2026 01:17:15 +0000 (0:00:00.434) 0:00:05.395 ********** 2026-04-17 01:17:24.121236 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121242 | orchestrator | ok: [testbed-node-1] 2026-04-17 01:17:24.121248 | orchestrator | ok: [testbed-node-2] 2026-04-17 01:17:24.121254 | orchestrator | 2026-04-17 01:17:24.121261 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 01:17:24.121267 | orchestrator | Friday 17 April 2026 01:17:15 +0000 (0:00:00.291) 0:00:05.687 ********** 2026-04-17 01:17:24.121274 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121280 | orchestrator | 2026-04-17 01:17:24.121286 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 01:17:24.121292 | orchestrator | Friday 17 April 2026 01:17:15 +0000 (0:00:00.258) 0:00:05.946 ********** 2026-04-17 01:17:24.121298 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121305 | orchestrator | 2026-04-17 01:17:24.121311 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 01:17:24.121317 | orchestrator | Friday 17 April 2026 01:17:16 +0000 (0:00:00.225) 0:00:06.171 ********** 2026-04-17 01:17:24.121324 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121330 | orchestrator | 2026-04-17 01:17:24.121336 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-04-17 01:17:24.121342 | orchestrator | Friday 17 April 2026 01:17:16 +0000 (0:00:00.236) 0:00:06.408 ********** 2026-04-17 01:17:24.121349 | orchestrator | 2026-04-17 01:17:24.121355 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:24.121361 | orchestrator | Friday 17 April 2026 01:17:16 +0000 (0:00:00.067) 0:00:06.475 ********** 2026-04-17 01:17:24.121367 | orchestrator | 2026-04-17 01:17:24.121373 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:24.121380 | orchestrator | Friday 17 April 2026 01:17:16 +0000 (0:00:00.066) 0:00:06.542 ********** 2026-04-17 01:17:24.121393 | orchestrator | 2026-04-17 01:17:24.121399 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 01:17:24.121405 | orchestrator | Friday 17 April 2026 01:17:16 +0000 (0:00:00.204) 0:00:06.747 ********** 2026-04-17 01:17:24.121411 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121418 | orchestrator | 2026-04-17 01:17:24.121424 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-17 01:17:24.121430 | orchestrator | Friday 17 April 2026 01:17:16 +0000 (0:00:00.259) 0:00:07.006 ********** 2026-04-17 01:17:24.121437 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121443 | orchestrator | 2026-04-17 01:17:24.121464 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-17 01:17:24.121470 | orchestrator | Friday 17 April 2026 01:17:17 +0000 (0:00:00.250) 0:00:07.257 ********** 2026-04-17 01:17:24.121476 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121482 | orchestrator | 2026-04-17 01:17:24.121489 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-04-17 01:17:24.121495 | orchestrator | Friday 17 April 2026 01:17:17 +0000 (0:00:00.121) 0:00:07.378 ********** 2026-04-17 01:17:24.121501 | orchestrator | changed: [testbed-node-0] 2026-04-17 01:17:24.121507 | orchestrator | 2026-04-17 01:17:24.121514 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-17 01:17:24.121520 | orchestrator | Friday 17 April 2026 01:17:18 +0000 (0:00:01.657) 0:00:09.035 ********** 2026-04-17 01:17:24.121526 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121532 | orchestrator | 2026-04-17 01:17:24.121539 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-17 01:17:24.121545 | orchestrator | Friday 17 April 2026 01:17:19 +0000 (0:00:00.253) 0:00:09.289 ********** 2026-04-17 01:17:24.121551 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121557 | orchestrator | 2026-04-17 01:17:24.121563 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-17 01:17:24.121570 | orchestrator | Friday 17 April 2026 01:17:19 +0000 (0:00:00.287) 0:00:09.577 ********** 2026-04-17 01:17:24.121576 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121581 | orchestrator | 2026-04-17 01:17:24.121587 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-17 01:17:24.121593 | orchestrator | Friday 17 April 2026 01:17:19 +0000 (0:00:00.144) 0:00:09.721 ********** 2026-04-17 01:17:24.121598 | orchestrator | ok: [testbed-node-0] 2026-04-17 01:17:24.121604 | orchestrator | 2026-04-17 01:17:24.121610 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 01:17:24.121615 | orchestrator | Friday 17 April 2026 01:17:19 +0000 (0:00:00.134) 0:00:09.856 ********** 2026-04-17 01:17:24.121622 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 
01:17:24.121628 | orchestrator | 2026-04-17 01:17:24.121634 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 01:17:24.121641 | orchestrator | Friday 17 April 2026 01:17:19 +0000 (0:00:00.243) 0:00:10.099 ********** 2026-04-17 01:17:24.121651 | orchestrator | skipping: [testbed-node-0] 2026-04-17 01:17:24.121657 | orchestrator | 2026-04-17 01:17:24.121664 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 01:17:24.121670 | orchestrator | Friday 17 April 2026 01:17:20 +0000 (0:00:00.250) 0:00:10.350 ********** 2026-04-17 01:17:24.121676 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:24.121683 | orchestrator | 2026-04-17 01:17:24.121689 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 01:17:24.121695 | orchestrator | Friday 17 April 2026 01:17:21 +0000 (0:00:01.475) 0:00:11.825 ********** 2026-04-17 01:17:24.121701 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:24.121707 | orchestrator | 2026-04-17 01:17:24.121714 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 01:17:24.121720 | orchestrator | Friday 17 April 2026 01:17:21 +0000 (0:00:00.259) 0:00:12.085 ********** 2026-04-17 01:17:24.121731 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:24.121737 | orchestrator | 2026-04-17 01:17:24.121743 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:24.121750 | orchestrator | Friday 17 April 2026 01:17:22 +0000 (0:00:00.270) 0:00:12.356 ********** 2026-04-17 01:17:24.121756 | orchestrator | 2026-04-17 01:17:24.121761 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:24.121767 | orchestrator 
| Friday 17 April 2026 01:17:22 +0000 (0:00:00.069) 0:00:12.425 ********** 2026-04-17 01:17:24.121773 | orchestrator | 2026-04-17 01:17:24.121779 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:24.121785 | orchestrator | Friday 17 April 2026 01:17:22 +0000 (0:00:00.068) 0:00:12.494 ********** 2026-04-17 01:17:24.121791 | orchestrator | 2026-04-17 01:17:24.121798 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 01:17:24.121803 | orchestrator | Friday 17 April 2026 01:17:22 +0000 (0:00:00.073) 0:00:12.568 ********** 2026-04-17 01:17:24.121809 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:24.121815 | orchestrator | 2026-04-17 01:17:24.121821 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 01:17:24.121827 | orchestrator | Friday 17 April 2026 01:17:23 +0000 (0:00:01.292) 0:00:13.860 ********** 2026-04-17 01:17:24.121833 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-17 01:17:24.121840 | orchestrator |  "msg": [ 2026-04-17 01:17:24.121847 | orchestrator |  "Validator run completed.", 2026-04-17 01:17:24.121853 | orchestrator |  "You can find the report file here:", 2026-04-17 01:17:24.121860 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-17T01:17:11+00:00-report.json", 2026-04-17 01:17:24.121869 | orchestrator |  "on the following host:", 2026-04-17 01:17:24.121876 | orchestrator |  "testbed-manager" 2026-04-17 01:17:24.121883 | orchestrator |  ] 2026-04-17 01:17:24.121890 | orchestrator | } 2026-04-17 01:17:24.121896 | orchestrator | 2026-04-17 01:17:24.121903 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:17:24.121911 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-17 01:17:24.121919 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:17:24.121931 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-17 01:17:24.424461 | orchestrator | 2026-04-17 01:17:24.424543 | orchestrator | 2026-04-17 01:17:24.424553 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:17:24.424564 | orchestrator | Friday 17 April 2026 01:17:24 +0000 (0:00:00.405) 0:00:14.266 ********** 2026-04-17 01:17:24.424570 | orchestrator | =============================================================================== 2026-04-17 01:17:24.424574 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.66s 2026-04-17 01:17:24.424578 | orchestrator | Aggregate test results step one ----------------------------------------- 1.48s 2026-04-17 01:17:24.424583 | orchestrator | Get container info ------------------------------------------------------ 1.36s 2026-04-17 01:17:24.424587 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-04-17 01:17:24.424591 | orchestrator | Get timestamp for report file ------------------------------------------- 0.99s 2026-04-17 01:17:24.424594 | orchestrator | Create report output directory ------------------------------------------ 0.68s 2026-04-17 01:17:24.424598 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.43s 2026-04-17 01:17:24.424602 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-04-17 01:17:24.424623 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-04-17 01:17:24.424627 | orchestrator | Flush handlers ---------------------------------------------------------- 0.34s 2026-04-17 01:17:24.424631 | 
orchestrator | Set test result to passed if container is existing ---------------------- 0.32s 2026-04-17 01:17:24.424635 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s 2026-04-17 01:17:24.424638 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-04-17 01:17:24.424642 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.29s 2026-04-17 01:17:24.424646 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-04-17 01:17:24.424649 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-04-17 01:17:24.424654 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-04-17 01:17:24.424658 | orchestrator | Print report file information ------------------------------------------- 0.26s 2026-04-17 01:17:24.424662 | orchestrator | Aggregate test results step one ----------------------------------------- 0.26s 2026-04-17 01:17:24.424665 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.25s 2026-04-17 01:17:24.590281 | orchestrator | + osism validate ceph-osds 2026-04-17 01:17:43.514618 | orchestrator | 2026-04-17 01:17:43.514712 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-17 01:17:43.514722 | orchestrator | 2026-04-17 01:17:43.514728 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-17 01:17:43.514735 | orchestrator | Friday 17 April 2026 01:17:39 +0000 (0:00:00.495) 0:00:00.495 ********** 2026-04-17 01:17:43.514742 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:43.514749 | orchestrator | 2026-04-17 01:17:43.514755 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-04-17 01:17:43.514761 | orchestrator | Friday 17 April 2026 01:17:40 +0000 (0:00:00.988) 0:00:01.484 ********** 2026-04-17 01:17:43.514768 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:43.514774 | orchestrator | 2026-04-17 01:17:43.514780 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-17 01:17:43.514786 | orchestrator | Friday 17 April 2026 01:17:40 +0000 (0:00:00.251) 0:00:01.735 ********** 2026-04-17 01:17:43.514792 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:17:43.514798 | orchestrator | 2026-04-17 01:17:43.514804 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-17 01:17:43.514810 | orchestrator | Friday 17 April 2026 01:17:41 +0000 (0:00:00.691) 0:00:02.427 ********** 2026-04-17 01:17:43.514818 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:43.514827 | orchestrator | 2026-04-17 01:17:43.514834 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-17 01:17:43.514840 | orchestrator | Friday 17 April 2026 01:17:41 +0000 (0:00:00.131) 0:00:02.559 ********** 2026-04-17 01:17:43.514847 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:43.514852 | orchestrator | 2026-04-17 01:17:43.514859 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-17 01:17:43.514864 | orchestrator | Friday 17 April 2026 01:17:41 +0000 (0:00:00.122) 0:00:02.682 ********** 2026-04-17 01:17:43.514870 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:43.514876 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:43.514886 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:43.514894 | orchestrator | 2026-04-17 01:17:43.514900 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-17 01:17:43.514906 | orchestrator | Friday 17 April 2026 01:17:42 +0000 (0:00:00.436) 0:00:03.118 ********** 2026-04-17 01:17:43.514912 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:43.514918 | orchestrator | 2026-04-17 01:17:43.514925 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-17 01:17:43.514956 | orchestrator | Friday 17 April 2026 01:17:42 +0000 (0:00:00.149) 0:00:03.268 ********** 2026-04-17 01:17:43.514961 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:43.514965 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:43.514969 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:43.514972 | orchestrator | 2026-04-17 01:17:43.514976 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-17 01:17:43.514980 | orchestrator | Friday 17 April 2026 01:17:42 +0000 (0:00:00.312) 0:00:03.581 ********** 2026-04-17 01:17:43.514984 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:43.514988 | orchestrator | 2026-04-17 01:17:43.515006 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:17:43.515010 | orchestrator | Friday 17 April 2026 01:17:43 +0000 (0:00:00.372) 0:00:03.953 ********** 2026-04-17 01:17:43.515014 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:43.515017 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:43.515021 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:43.515025 | orchestrator | 2026-04-17 01:17:43.515029 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-17 01:17:43.515033 | orchestrator | Friday 17 April 2026 01:17:43 +0000 (0:00:00.276) 0:00:04.230 ********** 2026-04-17 01:17:43.515039 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f2f1dc39f43b7ddcd6e1519fd184a3670b4a233ef7b65f304abb9112782e8ecb', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-17 01:17:43.515056 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fc42525d7c79a3ad4d5c47c44ca6679a98884cb7eb8243edbe5d76e3930d87b8', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-17 01:17:43.515063 | orchestrator | skipping: [testbed-node-3] => (item={'id': '709b68885df5a94d0c5d3e855cffe11ca7c064a206ad33ddba9e1fde0b843f1a', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-17 01:17:43.515074 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4fd28aad95f5db9c399b0f092256fd09a04c1aa6eacd62a018b62950c6341e21', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-17 01:17:43.515088 | orchestrator | skipping: [testbed-node-3] => (item={'id': '39c383c42dd348d93a7c7eccf1a07a48854dbbcd0778660e53276c2ca303affc', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-17 01:17:43.515109 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fca1cf613581c102d225f246b8dd425e6ff980ad8b1725b1ebc573e29a518a96', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-17 01:17:43.515116 | orchestrator | skipping: [testbed-node-3] => (item={'id': '45cc60190bca8c50c32b6abf63c3a8335ec3780bd94a2534e9e3b484343f61de', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-17 01:17:43.515126 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '52deb382ac36ecf68b2d92bb0d3d811c8505ec00516aa99a4920bd3d23a5e110', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-17 01:17:43.515133 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9233f80318cda1feec2463b730db44a478d25ed90366b40a595f0229d6f8cc1a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-17 01:17:43.515141 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5e98718e687bf4cbfc7992a27f590ebbf459dbbfc8eda56c0ac35f601a720b4b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-17 01:17:43.515197 | orchestrator | ok: [testbed-node-3] => (item={'id': '726e86824abaf4dd1a8179895e032478250d7f1642c420f164781681972f2424', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-17 01:17:43.515204 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b3f4c044c5e705fe4a793135311c8a5cec3e3c43afc31ffcd64bb669fd7dfc19', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-17 01:17:43.515209 | orchestrator | skipping: [testbed-node-3] => (item={'id': '11840e524c94eeaf53ea28e522dabdca8d90558cdb2bd71ce1ad114183fef9b7', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-17 01:17:43.515214 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25dc2b92d3ff28e66466a3663b622914b79e6662acbb520ab5d5d1470ffef5ff', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
25 minutes (healthy)'})  2026-04-17 01:17:43.515218 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3f4c60209958ba6744a6d53978b4dfea3900901c93f4c3e11d82c516f9dee391', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-17 01:17:43.515223 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'edb2642b94d1614d2d8a0c4f57a8ec0178aaaa8c43300f57831e48cdbb245ac7', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-17 01:17:43.515227 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5acfaadfdcb44339fd5b4a3ca0744341b558d8d1214f3de0bfd7e41299c74450', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-17 01:17:43.515231 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c970b66b60b2dd9143dfe772e4b4420d620d3968e42fb369a1496e089cb6a96', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-17 01:17:43.515236 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6cd24a39fba25af57bde1113f037a7c8f024df16b4775b26c0d4fd20b9eba597', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-17 01:17:43.515240 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd2de9162e12736f086f7a248d041bb76bb4dc88da7730c82ecb6eb8713e16b97', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-17 01:17:43.515248 | orchestrator | skipping: [testbed-node-4] => (item={'id': '612ad7dfea946ec8dcd4330c62cca4f08ddacbcb66845c76376c0ea29a0a1950', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-17 01:17:43.515258 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f08dc0cce0bc9e69f543a8282d211e4fc8416ab65811ab2bfa8338352fbd9a21', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-17 01:17:43.713532 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5b3672241bb002f8098ba0962d78566b41dce049d66fc536685e2baadb5e5e1d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-17 01:17:43.713608 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd744c15becf31db311e269781181d826e3ac6a5334ca30751388985d25254c03', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-17 01:17:43.713638 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2900300e96c30db634f70f6213320b77cd3e7d4f732ffb8a57448a3ad84ab14c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-17 01:17:43.713648 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c5ac6322e81c388219c893680bca3480e9c063b10f722a0fc0d1b0776b9da1a7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-17 01:17:43.713654 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c0aee062e15bae4ad98a9b13c5d370c8bda69fabc4dd7ed269373f567b43d968', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-17 01:17:43.713661 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'2853da871c48a751659fb541610aa0f8d246b374b2025241043b310f598d3eea', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-17 01:17:43.713672 | orchestrator | ok: [testbed-node-4] => (item={'id': '5360a0a167d0f854b5aec4d86c5755cff649d7d8b3b446ff15ddc25d20e5cc7c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-17 01:17:43.713680 | orchestrator | ok: [testbed-node-4] => (item={'id': '53af9d21c3de4955d21984e3ab67bd3223ea193e5dbe12b1b19e3d7745b29547', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-17 01:17:43.713686 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e09b70b528c82c474403f0abba377c9be7031db1b99bfb4c5486280f3c606a96', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-17 01:17:43.713692 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ceb3e128fc192f684038932227cddb7f8508f015285c07d3dd0f0b76050e5771', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 25 minutes (healthy)'})  2026-04-17 01:17:43.713698 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90a1e9abd73a9dfe1de9e4aaf7b130217216cb00a78a6045debf43f718b2a85c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-17 01:17:43.713706 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a04b05c771198148b26c20b51f027859bf3b596e1a7bc78e9f85edf7510778e6', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-17 01:17:43.713712 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '137671fae7d094fccd9d5f59ff3d08bc78b3f60416dc89014fcff4ecb4907a2c', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-17 01:17:43.713719 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2eff07175d478ba50d29905abd5c502cb53180949f3b4fbb6297ffa35206f39', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-17 01:17:43.713726 | orchestrator | skipping: [testbed-node-5] => (item={'id': '356c05d427098a467a49c46259e136f84115c3598d90f1d9ea81de1505d2c1f0', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-17 01:17:43.713747 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0943c78c75904efe138c3afd3743c5ba1d40a9aa9c6189b7fbf5add1be9dce44', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-17 01:17:43.713761 | orchestrator | skipping: [testbed-node-5] => (item={'id': '429bb2b4b358a53720439050187f7d02873fb156b1ea044312fb15991ea196a7', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-17 01:17:43.713768 | orchestrator | skipping: [testbed-node-5] => (item={'id': '83f6826d1daaab7251e7556de6201ed5f5ba695aedbdf30bd4db83f950c3c620', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-17 01:17:43.713775 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1103ee4147300bd5c1d82dca2b781f04d7b2382c8242570a0e3797028c1f2cc9', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 14 minutes'})  2026-04-17 01:17:43.713781 | orchestrator | skipping: [testbed-node-5] => (item={'id': '205e01ad71d01823095d81c2cfddf196bef89a7a76cf6884a3cf8787977c0f10', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-17 01:17:43.713788 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e69aab4d53e464216bfea262c99816a6373565ddb76ee7c7cb789402794ebae', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-17 01:17:43.713795 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5146479f8c6fe1c854d86cc53e93a28138de8f1973fb2b2670e47e8ce4a35fd4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-17 01:17:43.713801 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c847a96670ce0fa403cfed819e84e93d659c07c9a3aab55199623c9b1a07fb72', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-17 01:17:43.713808 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3349054dcb835e46ee36c89b2dce2330aab24dc80a13bc056b1671e6b77268a5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-17 01:17:43.713830 | orchestrator | ok: [testbed-node-5] => (item={'id': '891f4aee38dcc727ce9bf7f0a031878e27af06315b03f3643a0dca132571056e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-17 01:17:43.713836 | orchestrator | ok: [testbed-node-5] => (item={'id': '1833e36db1f5baa99a746b8842eb178a61d5d6ae39da64934adde182b052132a', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-17 01:17:43.713843 | orchestrator | skipping: [testbed-node-5] => (item={'id': '38de413ff61dc19d3a7067d2dfea5adde4b56d89fcadba1e0bfe4c76436ee001', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-17 01:17:43.713849 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ffe4c954e8a7c015138cb80d6f2929a0c850f6fd00ad8f8c74d4f9efe56b26b9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 25 minutes (healthy)'})  2026-04-17 01:17:43.713855 | orchestrator | skipping: [testbed-node-5] => (item={'id': '10fcbf020fea1c008284efb7fd653dc6d8837a702388b4bee7c2f6900619036e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-04-17 01:17:43.713867 | orchestrator | skipping: [testbed-node-5] => (item={'id': '739a08d8383054d073eb0d9e04cabda4a128155b0cdf56179b60c532b62b4afc', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-17 01:17:43.713878 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8bbdbaae5c7730d7e088f58b2d4ea7825d1ba51effb9bb3e52d4f3ff84af3cac', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-17 01:17:43.713891 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af1cff086aefb0ba20584658bef64aa07a785441a439a3cc71f15b00127947bc', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-17 01:17:55.932587 | orchestrator | 2026-04-17 01:17:55.932678 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-04-17 01:17:55.932686 | orchestrator | Friday 17 April 2026 01:17:43 +0000 (0:00:00.620) 0:00:04.850 ********** 2026-04-17 01:17:55.932691 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.932696 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.932700 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.932704 | orchestrator | 2026-04-17 01:17:55.932708 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-17 01:17:55.932712 | orchestrator | Friday 17 April 2026 01:17:44 +0000 (0:00:00.293) 0:00:05.143 ********** 2026-04-17 01:17:55.932716 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.932721 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.932725 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.932729 | orchestrator | 2026-04-17 01:17:55.932733 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-17 01:17:55.932737 | orchestrator | Friday 17 April 2026 01:17:44 +0000 (0:00:00.299) 0:00:05.442 ********** 2026-04-17 01:17:55.932741 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.932745 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.932749 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.932752 | orchestrator | 2026-04-17 01:17:55.932756 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:17:55.932760 | orchestrator | Friday 17 April 2026 01:17:44 +0000 (0:00:00.303) 0:00:05.746 ********** 2026-04-17 01:17:55.932764 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.932768 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.932771 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.932775 | orchestrator | 2026-04-17 01:17:55.932779 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-17 
01:17:55.932783 | orchestrator | Friday 17 April 2026 01:17:45 +0000 (0:00:00.410) 0:00:06.157 ********** 2026-04-17 01:17:55.932787 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-17 01:17:55.932792 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-17 01:17:55.932796 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.932800 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-17 01:17:55.932803 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-17 01:17:55.932807 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.932811 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-17 01:17:55.932815 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-17 01:17:55.932819 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.932822 | orchestrator | 2026-04-17 01:17:55.932826 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-17 01:17:55.932830 | orchestrator | Friday 17 April 2026 01:17:45 +0000 (0:00:00.302) 0:00:06.459 ********** 2026-04-17 01:17:55.932834 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.932838 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.932858 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.932862 | orchestrator | 2026-04-17 01:17:55.932866 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-17 01:17:55.932869 | orchestrator | Friday 17 April 2026 01:17:45 +0000 (0:00:00.281) 0:00:06.741 ********** 2026-04-17 01:17:55.932873 | orchestrator | skipping: [testbed-node-3] 
2026-04-17 01:17:55.932877 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.932881 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.932884 | orchestrator | 2026-04-17 01:17:55.932888 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-17 01:17:55.932892 | orchestrator | Friday 17 April 2026 01:17:46 +0000 (0:00:00.274) 0:00:07.016 ********** 2026-04-17 01:17:55.932896 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.932899 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.932903 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.932907 | orchestrator | 2026-04-17 01:17:55.932911 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-17 01:17:55.932914 | orchestrator | Friday 17 April 2026 01:17:46 +0000 (0:00:00.408) 0:00:07.424 ********** 2026-04-17 01:17:55.932920 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.932926 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.932933 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.932937 | orchestrator | 2026-04-17 01:17:55.932941 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 01:17:55.932945 | orchestrator | Friday 17 April 2026 01:17:46 +0000 (0:00:00.281) 0:00:07.705 ********** 2026-04-17 01:17:55.932949 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.932953 | orchestrator | 2026-04-17 01:17:55.932956 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 01:17:55.932960 | orchestrator | Friday 17 April 2026 01:17:47 +0000 (0:00:00.247) 0:00:07.953 ********** 2026-04-17 01:17:55.932975 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.932979 | orchestrator | 2026-04-17 01:17:55.932983 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-17 01:17:55.932987 | orchestrator | Friday 17 April 2026 01:17:47 +0000 (0:00:00.235) 0:00:08.188 ********** 2026-04-17 01:17:55.932990 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.932994 | orchestrator | 2026-04-17 01:17:55.932998 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:55.933002 | orchestrator | Friday 17 April 2026 01:17:47 +0000 (0:00:00.245) 0:00:08.434 ********** 2026-04-17 01:17:55.933005 | orchestrator | 2026-04-17 01:17:55.933009 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:55.933013 | orchestrator | Friday 17 April 2026 01:17:47 +0000 (0:00:00.067) 0:00:08.502 ********** 2026-04-17 01:17:55.933017 | orchestrator | 2026-04-17 01:17:55.933021 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:17:55.933034 | orchestrator | Friday 17 April 2026 01:17:47 +0000 (0:00:00.064) 0:00:08.566 ********** 2026-04-17 01:17:55.933038 | orchestrator | 2026-04-17 01:17:55.933042 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 01:17:55.933046 | orchestrator | Friday 17 April 2026 01:17:47 +0000 (0:00:00.074) 0:00:08.641 ********** 2026-04-17 01:17:55.933050 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.933053 | orchestrator | 2026-04-17 01:17:55.933057 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-17 01:17:55.933061 | orchestrator | Friday 17 April 2026 01:17:48 +0000 (0:00:00.581) 0:00:09.222 ********** 2026-04-17 01:17:55.933064 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.933068 | orchestrator | 2026-04-17 01:17:55.933072 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:17:55.933075 | 
orchestrator | Friday 17 April 2026 01:17:48 +0000 (0:00:00.240) 0:00:09.463 ********** 2026-04-17 01:17:55.933079 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933083 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.933092 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.933095 | orchestrator | 2026-04-17 01:17:55.933099 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-17 01:17:55.933103 | orchestrator | Friday 17 April 2026 01:17:48 +0000 (0:00:00.307) 0:00:09.771 ********** 2026-04-17 01:17:55.933107 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933111 | orchestrator | 2026-04-17 01:17:55.933114 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-17 01:17:55.933118 | orchestrator | Friday 17 April 2026 01:17:49 +0000 (0:00:00.232) 0:00:10.003 ********** 2026-04-17 01:17:55.933122 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-17 01:17:55.933127 | orchestrator | 2026-04-17 01:17:55.933131 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-17 01:17:55.933136 | orchestrator | Friday 17 April 2026 01:17:50 +0000 (0:00:01.802) 0:00:11.806 ********** 2026-04-17 01:17:55.933160 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933165 | orchestrator | 2026-04-17 01:17:55.933169 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-17 01:17:55.933173 | orchestrator | Friday 17 April 2026 01:17:51 +0000 (0:00:00.128) 0:00:11.935 ********** 2026-04-17 01:17:55.933177 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933182 | orchestrator | 2026-04-17 01:17:55.933186 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-17 01:17:55.933191 | orchestrator | Friday 17 April 2026 01:17:51 +0000 (0:00:00.283) 0:00:12.218 
********** 2026-04-17 01:17:55.933195 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.933200 | orchestrator | 2026-04-17 01:17:55.933206 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-17 01:17:55.933213 | orchestrator | Friday 17 April 2026 01:17:51 +0000 (0:00:00.113) 0:00:12.331 ********** 2026-04-17 01:17:55.933217 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933222 | orchestrator | 2026-04-17 01:17:55.933226 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:17:55.933230 | orchestrator | Friday 17 April 2026 01:17:51 +0000 (0:00:00.135) 0:00:12.467 ********** 2026-04-17 01:17:55.933234 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933238 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.933243 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.933247 | orchestrator | 2026-04-17 01:17:55.933252 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-17 01:17:55.933263 | orchestrator | Friday 17 April 2026 01:17:52 +0000 (0:00:00.441) 0:00:12.908 ********** 2026-04-17 01:17:55.933268 | orchestrator | changed: [testbed-node-3] 2026-04-17 01:17:55.933278 | orchestrator | changed: [testbed-node-5] 2026-04-17 01:17:55.933283 | orchestrator | changed: [testbed-node-4] 2026-04-17 01:17:55.933287 | orchestrator | 2026-04-17 01:17:55.933291 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-17 01:17:55.933296 | orchestrator | Friday 17 April 2026 01:17:53 +0000 (0:00:01.651) 0:00:14.560 ********** 2026-04-17 01:17:55.933300 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933304 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.933308 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.933312 | orchestrator | 2026-04-17 01:17:55.933317 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-17 01:17:55.933321 | orchestrator | Friday 17 April 2026 01:17:53 +0000 (0:00:00.325) 0:00:14.885 ********** 2026-04-17 01:17:55.933325 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933330 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.933334 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.933338 | orchestrator | 2026-04-17 01:17:55.933342 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-17 01:17:55.933347 | orchestrator | Friday 17 April 2026 01:17:54 +0000 (0:00:00.467) 0:00:15.353 ********** 2026-04-17 01:17:55.933351 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.933355 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.933364 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.933368 | orchestrator | 2026-04-17 01:17:55.933373 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-17 01:17:55.933380 | orchestrator | Friday 17 April 2026 01:17:54 +0000 (0:00:00.458) 0:00:15.811 ********** 2026-04-17 01:17:55.933389 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:17:55.933393 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:17:55.933400 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:17:55.933407 | orchestrator | 2026-04-17 01:17:55.933413 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-17 01:17:55.933419 | orchestrator | Friday 17 April 2026 01:17:55 +0000 (0:00:00.325) 0:00:16.137 ********** 2026-04-17 01:17:55.933425 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.933434 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.933443 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.933448 | orchestrator | 2026-04-17 01:17:55.933455 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-17 01:17:55.933460 | orchestrator | Friday 17 April 2026 01:17:55 +0000 (0:00:00.270) 0:00:16.407 ********** 2026-04-17 01:17:55.933466 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:17:55.933473 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:17:55.933478 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:17:55.933484 | orchestrator | 2026-04-17 01:17:55.933496 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-17 01:18:02.907316 | orchestrator | Friday 17 April 2026 01:17:55 +0000 (0:00:00.425) 0:00:16.833 ********** 2026-04-17 01:18:02.907418 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:18:02.907479 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:18:02.907488 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:18:02.907494 | orchestrator | 2026-04-17 01:18:02.907502 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-17 01:18:02.907509 | orchestrator | Friday 17 April 2026 01:17:56 +0000 (0:00:00.507) 0:00:17.340 ********** 2026-04-17 01:18:02.907515 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:18:02.907522 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:18:02.907529 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:18:02.907535 | orchestrator | 2026-04-17 01:18:02.907542 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-17 01:18:02.907548 | orchestrator | Friday 17 April 2026 01:17:56 +0000 (0:00:00.518) 0:00:17.858 ********** 2026-04-17 01:18:02.907555 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:18:02.907561 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:18:02.907567 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:18:02.907574 | orchestrator | 2026-04-17 01:18:02.907580 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-17 
01:18:02.907586 | orchestrator | Friday 17 April 2026 01:17:57 +0000 (0:00:00.290) 0:00:18.149 ********** 2026-04-17 01:18:02.907593 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:18:02.907600 | orchestrator | skipping: [testbed-node-4] 2026-04-17 01:18:02.907607 | orchestrator | skipping: [testbed-node-5] 2026-04-17 01:18:02.907614 | orchestrator | 2026-04-17 01:18:02.907619 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-17 01:18:02.907626 | orchestrator | Friday 17 April 2026 01:17:57 +0000 (0:00:00.488) 0:00:18.637 ********** 2026-04-17 01:18:02.907633 | orchestrator | ok: [testbed-node-3] 2026-04-17 01:18:02.907639 | orchestrator | ok: [testbed-node-4] 2026-04-17 01:18:02.907645 | orchestrator | ok: [testbed-node-5] 2026-04-17 01:18:02.907651 | orchestrator | 2026-04-17 01:18:02.907658 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-17 01:18:02.907665 | orchestrator | Friday 17 April 2026 01:17:58 +0000 (0:00:00.288) 0:00:18.926 ********** 2026-04-17 01:18:02.907671 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:18:02.907678 | orchestrator | 2026-04-17 01:18:02.907684 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-17 01:18:02.907710 | orchestrator | Friday 17 April 2026 01:17:58 +0000 (0:00:00.259) 0:00:19.185 ********** 2026-04-17 01:18:02.907717 | orchestrator | skipping: [testbed-node-3] 2026-04-17 01:18:02.907723 | orchestrator | 2026-04-17 01:18:02.907730 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-17 01:18:02.907736 | orchestrator | Friday 17 April 2026 01:17:58 +0000 (0:00:00.240) 0:00:19.425 ********** 2026-04-17 01:18:02.907742 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:18:02.907748 | orchestrator | 2026-04-17 01:18:02.907754 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-17 01:18:02.907761 | orchestrator | Friday 17 April 2026 01:18:00 +0000 (0:00:01.674) 0:00:21.100 ********** 2026-04-17 01:18:02.907767 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:18:02.907772 | orchestrator | 2026-04-17 01:18:02.907778 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-17 01:18:02.907784 | orchestrator | Friday 17 April 2026 01:18:00 +0000 (0:00:00.263) 0:00:21.364 ********** 2026-04-17 01:18:02.907790 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:18:02.907797 | orchestrator | 2026-04-17 01:18:02.907803 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:18:02.907809 | orchestrator | Friday 17 April 2026 01:18:00 +0000 (0:00:00.242) 0:00:21.606 ********** 2026-04-17 01:18:02.907816 | orchestrator | 2026-04-17 01:18:02.907821 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:18:02.907828 | orchestrator | Friday 17 April 2026 01:18:00 +0000 (0:00:00.070) 0:00:21.676 ********** 2026-04-17 01:18:02.907834 | orchestrator | 2026-04-17 01:18:02.907840 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-17 01:18:02.907846 | orchestrator | Friday 17 April 2026 01:18:00 +0000 (0:00:00.212) 0:00:21.889 ********** 2026-04-17 01:18:02.907853 | orchestrator | 2026-04-17 01:18:02.907860 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-17 01:18:02.907867 | orchestrator | Friday 17 April 2026 01:18:01 +0000 (0:00:00.067) 0:00:21.957 ********** 2026-04-17 01:18:02.907873 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-17 01:18:02.907883 | orchestrator | 
2026-04-17 01:18:02.907895 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-17 01:18:02.907904 | orchestrator | Friday 17 April 2026 01:18:02 +0000 (0:00:01.217) 0:00:23.174 ********** 2026-04-17 01:18:02.907913 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-17 01:18:02.907920 | orchestrator |  "msg": [ 2026-04-17 01:18:02.907927 | orchestrator |  "Validator run completed.", 2026-04-17 01:18:02.907939 | orchestrator |  "You can find the report file here:", 2026-04-17 01:18:02.907946 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-17T01:17:40+00:00-report.json", 2026-04-17 01:18:02.907953 | orchestrator |  "on the following host:", 2026-04-17 01:18:02.907961 | orchestrator |  "testbed-manager" 2026-04-17 01:18:02.907967 | orchestrator |  ] 2026-04-17 01:18:02.907981 | orchestrator | } 2026-04-17 01:18:02.907990 | orchestrator | 2026-04-17 01:18:02.907999 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:18:02.908007 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-17 01:18:02.908016 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 01:18:02.908042 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-17 01:18:02.908049 | orchestrator | 2026-04-17 01:18:02.908061 | orchestrator | 2026-04-17 01:18:02.908071 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:18:02.908132 | orchestrator | Friday 17 April 2026 01:18:02 +0000 (0:00:00.380) 0:00:23.555 ********** 2026-04-17 01:18:02.908163 | orchestrator | =============================================================================== 2026-04-17 01:18:02.908170 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 1.80s 2026-04-17 01:18:02.908176 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2026-04-17 01:18:02.908182 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.65s 2026-04-17 01:18:02.908188 | orchestrator | Write report file ------------------------------------------------------- 1.22s 2026-04-17 01:18:02.908194 | orchestrator | Get timestamp for report file ------------------------------------------- 0.99s 2026-04-17 01:18:02.908200 | orchestrator | Create report output directory ------------------------------------------ 0.69s 2026-04-17 01:18:02.908206 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.62s 2026-04-17 01:18:02.908212 | orchestrator | Print report file information ------------------------------------------- 0.58s 2026-04-17 01:18:02.908218 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.52s 2026-04-17 01:18:02.908223 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2026-04-17 01:18:02.908228 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.49s 2026-04-17 01:18:02.908234 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s 2026-04-17 01:18:02.908240 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.46s 2026-04-17 01:18:02.908246 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s 2026-04-17 01:18:02.908253 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.44s 2026-04-17 01:18:02.908258 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.43s 2026-04-17 01:18:02.908265 | orchestrator | Prepare test data 
------------------------------------------------------- 0.41s 2026-04-17 01:18:02.908271 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.41s 2026-04-17 01:18:02.908278 | orchestrator | Print report file information ------------------------------------------- 0.38s 2026-04-17 01:18:02.908284 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.37s 2026-04-17 01:18:03.104294 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-17 01:18:03.112833 | orchestrator | + set -e 2026-04-17 01:18:03.112901 | orchestrator | + source /opt/manager-vars.sh 2026-04-17 01:18:03.112908 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-17 01:18:03.112913 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-17 01:18:03.114606 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-17 01:18:03.114664 | orchestrator | ++ CEPH_VERSION=reef 2026-04-17 01:18:03.114674 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-17 01:18:03.114683 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-17 01:18:03.114690 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-17 01:18:03.114697 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-17 01:18:03.114704 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-17 01:18:03.114711 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-17 01:18:03.114718 | orchestrator | ++ export ARA=false 2026-04-17 01:18:03.114726 | orchestrator | ++ ARA=false 2026-04-17 01:18:03.114733 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-17 01:18:03.114741 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-17 01:18:03.114748 | orchestrator | ++ export TEMPEST=true 2026-04-17 01:18:03.114756 | orchestrator | ++ TEMPEST=true 2026-04-17 01:18:03.114763 | orchestrator | ++ export IS_ZUUL=true 2026-04-17 01:18:03.114767 | orchestrator | ++ IS_ZUUL=true 2026-04-17 01:18:03.114772 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 01:18:03.114776 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203 2026-04-17 01:18:03.114781 | orchestrator | ++ export EXTERNAL_API=false 2026-04-17 01:18:03.114785 | orchestrator | ++ EXTERNAL_API=false 2026-04-17 01:18:03.114790 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-17 01:18:03.114794 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-17 01:18:03.114798 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-17 01:18:03.114802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-17 01:18:03.114806 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-17 01:18:03.114828 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-17 01:18:03.114833 | orchestrator | + source /etc/os-release 2026-04-17 01:18:03.114837 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-17 01:18:03.114841 | orchestrator | ++ NAME=Ubuntu 2026-04-17 01:18:03.114845 | orchestrator | ++ VERSION_ID=24.04 2026-04-17 01:18:03.114849 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-17 01:18:03.114854 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-17 01:18:03.114858 | orchestrator | ++ ID=ubuntu 2026-04-17 01:18:03.114862 | orchestrator | ++ ID_LIKE=debian 2026-04-17 01:18:03.114867 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-17 01:18:03.114871 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-17 01:18:03.114875 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-17 01:18:03.114879 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-17 01:18:03.114884 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-17 01:18:03.114889 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-17 01:18:03.114893 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-17 01:18:03.114908 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 
2026-04-17 01:18:03.114914 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-17 01:18:03.148825 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-17 01:18:26.277357 | orchestrator | 2026-04-17 01:18:26.277439 | orchestrator | # Status of Elasticsearch 2026-04-17 01:18:26.277448 | orchestrator | 2026-04-17 01:18:26.277454 | orchestrator | + pushd /opt/configuration/contrib 2026-04-17 01:18:26.277461 | orchestrator | + echo 2026-04-17 01:18:26.277467 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-17 01:18:26.277472 | orchestrator | + echo 2026-04-17 01:18:26.277478 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-17 01:18:26.443884 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-17 01:18:26.443919 | orchestrator | 2026-04-17 01:18:26.443926 | orchestrator | # Status of MariaDB 2026-04-17 01:18:26.443932 | orchestrator | 2026-04-17 01:18:26.443937 | orchestrator | + echo 2026-04-17 01:18:26.443943 | orchestrator | + echo '# Status of MariaDB' 2026-04-17 01:18:26.443948 | orchestrator | + echo 2026-04-17 01:18:26.444655 | orchestrator | ++ semver latest 10.0.0-0 2026-04-17 01:18:26.503206 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 01:18:26.503278 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 01:18:26.503286 | orchestrator | + osism status database 2026-04-17 01:18:28.087823 | orchestrator | 2026-04-17 01:18:28 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 
2026-04-17 01:18:28.098554 | orchestrator | 2026-04-17 01:18:28 | INFO  | Cluster Status: Primary 2026-04-17 01:18:28.098630 | orchestrator | 2026-04-17 01:18:28 | INFO  | Connected: ON 2026-04-17 01:18:28.098636 | orchestrator | 2026-04-17 01:18:28 | INFO  | Ready: ON 2026-04-17 01:18:28.098641 | orchestrator | 2026-04-17 01:18:28 | INFO  | Cluster Size: 3 2026-04-17 01:18:28.098645 | orchestrator | 2026-04-17 01:18:28 | INFO  | Local State: Synced 2026-04-17 01:18:28.098650 | orchestrator | 2026-04-17 01:18:28 | INFO  | Cluster State UUID: 0f6345b9-39f8-11f1-ae3c-5b3a33b603be 2026-04-17 01:18:28.098656 | orchestrator | 2026-04-17 01:18:28 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-17 01:18:28.098661 | orchestrator | 2026-04-17 01:18:28 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-17 01:18:28.098665 | orchestrator | 2026-04-17 01:18:28 | INFO  | Local Node UUID: 420be769-39f8-11f1-b1a0-cad3fe1f0cac 2026-04-17 01:18:28.098911 | orchestrator | 2026-04-17 01:18:28 | INFO  | Flow Control Paused: 0.00% 2026-04-17 01:18:28.098922 | orchestrator | 2026-04-17 01:18:28 | INFO  | Recv Queue Avg: 0.0120482 2026-04-17 01:18:28.099632 | orchestrator | 2026-04-17 01:18:28 | INFO  | Send Queue Avg: 0.000468311 2026-04-17 01:18:28.099687 | orchestrator | 2026-04-17 01:18:28 | INFO  | Transactions: 4157 local commits, 6351 replicated, 83 received 2026-04-17 01:18:28.099698 | orchestrator | 2026-04-17 01:18:28 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-17 01:18:28.099705 | orchestrator | 2026-04-17 01:18:28 | INFO  | MariaDB Uptime: 22 minutes, 4 seconds 2026-04-17 01:18:28.099713 | orchestrator | 2026-04-17 01:18:28 | INFO  | Threads: 130 connected, 1 running 2026-04-17 01:18:28.099720 | orchestrator | 2026-04-17 01:18:28 | INFO  | Queries: 191095 total, 0 slow 2026-04-17 01:18:28.099727 | orchestrator | 2026-04-17 01:18:28 | INFO  | Aborted Connects: 140 2026-04-17 01:18:28.099733 | orchestrator | 2026-04-17 
01:18:28 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-17 01:18:28.306443 | orchestrator | 2026-04-17 01:18:28.306554 | orchestrator | # Status of Prometheus 2026-04-17 01:18:28.306580 | orchestrator | 2026-04-17 01:18:28.306598 | orchestrator | + echo 2026-04-17 01:18:28.306624 | orchestrator | + echo '# Status of Prometheus' 2026-04-17 01:18:28.306636 | orchestrator | + echo 2026-04-17 01:18:28.306647 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-17 01:18:28.368538 | orchestrator | Unauthorized 2026-04-17 01:18:28.371369 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-17 01:18:28.427771 | orchestrator | Unauthorized 2026-04-17 01:18:28.430864 | orchestrator | 2026-04-17 01:18:28.430944 | orchestrator | # Status of RabbitMQ 2026-04-17 01:18:28.430958 | orchestrator | 2026-04-17 01:18:28.430970 | orchestrator | + echo 2026-04-17 01:18:28.430981 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-17 01:18:28.430992 | orchestrator | + echo 2026-04-17 01:18:28.431388 | orchestrator | ++ semver latest 10.0.0-0 2026-04-17 01:18:28.481550 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-17 01:18:28.481640 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 01:18:28.481654 | orchestrator | + osism status messaging 2026-04-17 01:18:35.485551 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 
2026-04-17 01:18:35.547352 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-04-17 01:18:35.547444 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-04-17 01:18:35.547457 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-04-17 01:18:35.547479 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Cluster Size: 3
2026-04-17 01:18:35.547562 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-17 01:18:35.548159 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-17 01:18:35.548257 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-04-17 01:18:35.548598 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Connections: 205, Channels: 204, Queues: 173
2026-04-17 01:18:35.548622 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Messages: 236 total, 231 ready, 5 unacked
2026-04-17 01:18:35.548632 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Message Rates: 6.6/s publish, 6.6/s deliver
2026-04-17 01:18:35.548796 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Disk Free: 56.2 GB (limit: 0.0 GB)
2026-04-17 01:18:35.549030 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Memory Used: 0.19 GB (limit: 12.54 GB)
2026-04-17 01:18:35.549353 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] File Descriptors: 114/262144
2026-04-17 01:18:35.549596 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-0] Sockets: 68/235840
2026-04-17 01:18:35.549783 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-04-17 01:18:35.607018 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7
2026-04-17 01:18:35.607336 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15
2026-04-17 01:18:35.607371 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-04-17 01:18:35.607380 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Cluster Size: 3
2026-04-17 01:18:35.607450 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-17 01:18:35.607460 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-17 01:18:35.607478 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-04-17 01:18:35.607487 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Connections: 205, Channels: 204, Queues: 173
2026-04-17 01:18:35.607494 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Messages: 236 total, 231 ready, 5 unacked
2026-04-17 01:18:35.607502 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Message Rates: 6.6/s publish, 6.2/s deliver
2026-04-17 01:18:35.607695 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Disk Free: 56.6 GB (limit: 0.0 GB)
2026-04-17 01:18:35.608091 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Memory Used: 0.19 GB (limit: 12.54 GB)
2026-04-17 01:18:35.608253 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] File Descriptors: 110/262144
2026-04-17 01:18:35.608275 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-1] Sockets: 64/235840
2026-04-17 01:18:35.608284 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-04-17 01:18:35.666868 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7
2026-04-17 01:18:35.666970 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15
2026-04-17 01:18:35.666983 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-04-17 01:18:35.666993 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Cluster Size: 3
2026-04-17 01:18:35.667003 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-17 01:18:35.668047 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-04-17 01:18:35.668153 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-04-17 01:18:35.668168 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Connections: 205, Channels: 204, Queues: 173
2026-04-17 01:18:35.668177 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Messages: 236 total, 231 ready, 5 unacked
2026-04-17 01:18:35.668185 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Message Rates: 6.0/s publish, 6.2/s deliver
2026-04-17 01:18:35.668193 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Disk Free: 56.5 GB (limit: 0.0 GB)
2026-04-17 01:18:35.668219 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Memory Used: 0.20 GB (limit: 12.54 GB)
2026-04-17 01:18:35.668245 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] File Descriptors: 119/262144
2026-04-17 01:18:35.668263 | orchestrator | 2026-04-17 01:18:35 | INFO  | [testbed-node-2] Sockets: 73/235840
2026-04-17 01:18:35.668273 | orchestrator | 2026-04-17 01:18:35 | INFO  | RabbitMQ Cluster validation PASSED
2026-04-17 01:18:35.893786 | orchestrator |
2026-04-17 01:18:35.893855 | orchestrator | # Status of Redis
2026-04-17 01:18:35.893862 | orchestrator |
2026-04-17 01:18:35.893867 | orchestrator | + echo
2026-04-17 01:18:35.893873 | orchestrator | + echo '# Status of Redis'
2026-04-17 01:18:35.893878 | orchestrator | + echo
2026-04-17 01:18:35.893884 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-17 01:18:35.901101 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002072s;;;0.000000;10.000000
2026-04-17 01:18:35.901187 | orchestrator |
2026-04-17 01:18:35.901197 | orchestrator | # Create backup of MariaDB database
2026-04-17 01:18:35.901204 | orchestrator |
2026-04-17 01:18:35.901210 | orchestrator | + popd
2026-04-17 01:18:35.901215 | orchestrator | + echo
2026-04-17 01:18:35.901221 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-17 01:18:35.901226 | orchestrator | + echo
2026-04-17 01:18:35.901232 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-17 01:18:37.134182 | orchestrator | 2026-04-17 01:18:37 | INFO  | Prepare task for execution of mariadb_backup.
2026-04-17 01:18:37.191029 | orchestrator | 2026-04-17 01:18:37 | INFO  | Task ce3b1376-0e8f-4438-8e48-e697b3461f41 (mariadb_backup) was prepared for execution.
2026-04-17 01:18:37.191208 | orchestrator | 2026-04-17 01:18:37 | INFO  | It takes a moment until task ce3b1376-0e8f-4438-8e48-e697b3461f41 (mariadb_backup) has been started and output is visible here.
2026-04-17 01:19:02.879690 | orchestrator |
2026-04-17 01:19:02.879828 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-17 01:19:02.879847 | orchestrator |
2026-04-17 01:19:02.879868 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-17 01:19:02.879887 | orchestrator | Friday 17 April 2026 01:18:40 +0000 (0:00:00.239) 0:00:00.239 **********
2026-04-17 01:19:02.879911 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:19:02.879939 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:19:02.879958 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:19:02.879977 | orchestrator |
2026-04-17 01:19:02.879994 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-17 01:19:02.880013 | orchestrator | Friday 17 April 2026 01:18:40 +0000 (0:00:00.318) 0:00:00.558 **********
2026-04-17 01:19:02.880033 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-17 01:19:02.880053 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-17 01:19:02.880073 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-17 01:19:02.880092 | orchestrator |
2026-04-17 01:19:02.880222 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-17 01:19:02.880238 | orchestrator |
2026-04-17 01:19:02.880252 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-17 01:19:02.880264 | orchestrator | Friday 17 April 2026 01:18:41 +0000 (0:00:00.432) 0:00:00.990 **********
2026-04-17 01:19:02.880277 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-17 01:19:02.880291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-17 01:19:02.880304 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-17 01:19:02.880317 | orchestrator |
2026-04-17 01:19:02.880328 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-17 01:19:02.880340 | orchestrator | Friday 17 April 2026 01:18:41 +0000 (0:00:00.389) 0:00:01.380 **********
2026-04-17 01:19:02.880351 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-17 01:19:02.880396 | orchestrator |
2026-04-17 01:19:02.880407 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-17 01:19:02.880418 | orchestrator | Friday 17 April 2026 01:18:42 +0000 (0:00:00.633) 0:00:02.013 **********
2026-04-17 01:19:02.880429 | orchestrator | ok: [testbed-node-0]
2026-04-17 01:19:02.880440 | orchestrator | ok: [testbed-node-1]
2026-04-17 01:19:02.880451 | orchestrator | ok: [testbed-node-2]
2026-04-17 01:19:02.880462 | orchestrator |
2026-04-17 01:19:02.880473 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-17 01:19:02.880485 | orchestrator | Friday 17 April 2026 01:18:45 +0000 (0:00:03.179) 0:00:05.192 **********
2026-04-17 01:19:02.880496 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:19:02.880508 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:19:02.880519 | orchestrator | changed: [testbed-node-0]
2026-04-17 01:19:02.880530 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-17 01:19:02.880546 | orchestrator |
2026-04-17 01:19:02.880565 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-17 01:19:02.880593 | orchestrator | skipping: no hosts matched
2026-04-17 01:19:02.880613 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-17 01:19:02.880630 | orchestrator |
2026-04-17 01:19:02.880648 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-17 01:19:02.880666 | orchestrator | skipping: no hosts matched
2026-04-17 01:19:02.880683 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-17 01:19:02.880700 | orchestrator | mariadb_bootstrap_restart
2026-04-17 01:19:02.880719 | orchestrator |
2026-04-17 01:19:02.880738 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-17 01:19:02.880756 | orchestrator | skipping: no hosts matched
2026-04-17 01:19:02.880773 | orchestrator |
2026-04-17 01:19:02.880791 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-17 01:19:02.880807 | orchestrator |
2026-04-17 01:19:02.880824 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-17 01:19:02.880843 | orchestrator | Friday 17 April 2026 01:19:02 +0000 (0:00:16.849) 0:00:22.041 **********
2026-04-17 01:19:02.880862 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:19:02.880881 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:19:02.880900 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:19:02.880917 | orchestrator |
2026-04-17 01:19:02.880936 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-17 01:19:02.880954 | orchestrator | Friday 17 April 2026 01:19:02 +0000 (0:00:00.280) 0:00:22.322 **********
2026-04-17 01:19:02.880973 | orchestrator | skipping: [testbed-node-0]
2026-04-17 01:19:02.880993 | orchestrator | skipping: [testbed-node-1]
2026-04-17 01:19:02.881012 | orchestrator | skipping: [testbed-node-2]
2026-04-17 01:19:02.881030 | orchestrator |
2026-04-17 01:19:02.881049 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:19:02.881070 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-17 01:19:02.881252 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 01:19:02.881284 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 01:19:02.881301 | orchestrator |
2026-04-17 01:19:02.881320 | orchestrator |
2026-04-17 01:19:02.881338 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:19:02.881357 | orchestrator | Friday 17 April 2026 01:19:02 +0000 (0:00:00.211) 0:00:22.534 **********
2026-04-17 01:19:02.881378 | orchestrator | ===============================================================================
2026-04-17 01:19:02.881397 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 16.85s
2026-04-17 01:19:02.881471 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.18s
2026-04-17 01:19:02.881494 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.63s
2026-04-17 01:19:02.881513 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-04-17 01:19:02.881530 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s
2026-04-17 01:19:02.881541 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-04-17 01:19:02.881552 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s
2026-04-17 01:19:02.881564 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s
2026-04-17 01:19:03.052768 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-17 01:19:03.058179 | orchestrator | + set -e
2026-04-17 01:19:03.058260 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-17 01:19:03.058273 | orchestrator | ++ export INTERACTIVE=false
2026-04-17 01:19:03.058282 | orchestrator | ++ INTERACTIVE=false
2026-04-17 01:19:03.058290 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-17 01:19:03.058298 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-17 01:19:03.058307 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-17 01:19:03.059595 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-17 01:19:03.062267 | orchestrator |
2026-04-17 01:19:03.062308 | orchestrator | # OpenStack endpoints
2026-04-17 01:19:03.062317 | orchestrator |
2026-04-17 01:19:03.062324 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-17 01:19:03.062330 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-17 01:19:03.062335 | orchestrator | + export OS_CLOUD=admin
2026-04-17 01:19:03.062341 | orchestrator | + OS_CLOUD=admin
2026-04-17 01:19:03.062346 | orchestrator | + echo
2026-04-17 01:19:03.062352 | orchestrator | + echo '# OpenStack endpoints'
2026-04-17 01:19:03.062357 | orchestrator | + echo
2026-04-17 01:19:03.062362 | orchestrator | + openstack endpoint list
2026-04-17 01:19:06.350951 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-17 01:19:06.351078 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-17 01:19:06.351094 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-17 01:19:06.351128 | orchestrator | | 11dcb002fdc84996bbb949a9a36b6252 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-17 01:19:06.351140 | orchestrator | | 157968e66b4046ddb7ff9cb12dd16fab | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-17 01:19:06.351151 | orchestrator | | 17ff98a30b3347ebac7d65d106c728b5 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-17 01:19:06.351162 | orchestrator | | 1b44b2a03e7a44edadd914c98c46687f | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-17 01:19:06.351174 | orchestrator | | 21e6cda677dd4fafbe47d63c0ddd5dd8 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-17 01:19:06.351186 | orchestrator | | 2be610d51b724c55a9a1c53c308e699e | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-17 01:19:06.351216 | orchestrator | | 3e34fa1dc49a4ec48e19002bb9f20db9 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-17 01:19:06.351227 | orchestrator | | 433809f1f4254c20993f334511d8bb3b | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-17 01:19:06.351271 | orchestrator | | 62b1bf41abd4483bb58fd5d9ce1d1609 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-17 01:19:06.351283 | orchestrator | | 6322e373ab424541bf9a625e26c5c6c5 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-17 01:19:06.351295 | orchestrator | | 6ef8e8b5c5794532b1f6317dea5df46c | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-17 01:19:06.351308 | orchestrator | | 7273d5b73c28469e9b9809e337ff82be | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-17 01:19:06.351320 | orchestrator | | 7788be6d12004e95a42c255c1612543b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-17 01:19:06.351332 | orchestrator | | 7b0787e50a2647c0a598d9253fa604e7 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-17 01:19:06.351344 | orchestrator | | 7c361c41ca6d4712ba23a6dfd157c703 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-17 01:19:06.351355 | orchestrator | | 8b7fe93aa88f47408bf0e2341800d954 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-17 01:19:06.351388 | orchestrator | | 98acba526d4244edb3838da8cde3b4b8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-17 01:19:06.351400 | orchestrator | | 9da8c6ee8d20436b858d8eb6fdfda03a | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-17 01:19:06.351411 | orchestrator | | a4e11b68839b454bb37f081308688c96 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-17 01:19:06.351423 | orchestrator | | a6e4cf95be124804b45948bf7617231f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-17 01:19:06.351454 | orchestrator | | a9f5f3426e294e06bd6035739df0e2e7 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-17 01:19:06.351465 | orchestrator | | d22e474b9f16421082074ae93cea2c8b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-17 01:19:06.351477 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-17 01:19:06.573651 | orchestrator |
2026-04-17 01:19:06.573751 | orchestrator | # Cinder
2026-04-17 01:19:06.573764 | orchestrator |
2026-04-17 01:19:06.573772 | orchestrator | + echo
2026-04-17 01:19:06.573781 | orchestrator | + echo '# Cinder'
2026-04-17 01:19:06.573789 | orchestrator | + echo
2026-04-17 01:19:06.573797 | orchestrator | + openstack volume service list
2026-04-17 01:19:10.142445 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-17 01:19:10.142549 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-17 01:19:10.142562 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-17 01:19:10.142571 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-17T01:19:02.000000 |
2026-04-17 01:19:10.142579 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-17T01:19:03.000000 |
2026-04-17 01:19:10.142612 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-17T01:19:03.000000 |
2026-04-17 01:19:10.142622 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-17T01:19:02.000000 |
2026-04-17 01:19:10.142631 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-17T01:19:07.000000 |
2026-04-17 01:19:10.142639 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-17T01:19:08.000000 |
2026-04-17 01:19:10.142647 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-17T01:19:01.000000 |
2026-04-17 01:19:10.142655 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-17T01:19:03.000000 |
2026-04-17 01:19:10.142677 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-17T01:19:04.000000 |
2026-04-17 01:19:10.142686 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-17 01:19:10.384666 | orchestrator |
2026-04-17 01:19:10.384737 | orchestrator | # Neutron
2026-04-17 01:19:10.384744 | orchestrator |
2026-04-17 01:19:10.384748 | orchestrator | + echo
2026-04-17 01:19:10.384752 | orchestrator | + echo '# Neutron'
2026-04-17 01:19:10.384758 | orchestrator | + echo
2026-04-17 01:19:10.384762 | orchestrator | + openstack network agent list
2026-04-17 01:19:13.612507 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-17 01:19:13.612603 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-17 01:19:13.612614 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-17 01:19:13.612621 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-17 01:19:13.612628 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-17 01:19:13.612634 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-17 01:19:13.612641 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-17 01:19:13.612647 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-17 01:19:13.612653 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-17 01:19:13.612659 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-17 01:19:13.612665 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-17 01:19:13.612672 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-17 01:19:13.612678 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-17 01:19:13.857403 | orchestrator | + openstack network service provider list
2026-04-17 01:19:16.330883 | orchestrator | +---------------+------+---------+
2026-04-17 01:19:16.330964 | orchestrator | | Service Type | Name | Default |
2026-04-17 01:19:16.330970 | orchestrator | +---------------+------+---------+
2026-04-17 01:19:16.330974 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-17 01:19:16.330998 | orchestrator | +---------------+------+---------+
2026-04-17 01:19:16.567590 | orchestrator | + echo
2026-04-17 01:19:16.567692 | orchestrator |
2026-04-17 01:19:16.567703 | orchestrator | # Nova
2026-04-17 01:19:16.567709 | orchestrator |
2026-04-17 01:19:16.567715 | orchestrator | + echo '# Nova'
2026-04-17 01:19:16.567720 | orchestrator | + echo
2026-04-17 01:19:16.567726 | orchestrator | + openstack compute service list
2026-04-17 01:19:19.279412 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-17 01:19:19.279486 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-17 01:19:19.279492 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-17 01:19:19.279497 | orchestrator | | 4b9944e8-f875-4006-bc26-e7c059a82dcd | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-17T01:19:09.000000 |
2026-04-17 01:19:19.279501 | orchestrator | | a6bca195-057e-4124-a1f0-fe71d76a1dda | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-17T01:19:09.000000 |
2026-04-17 01:19:19.279505 | orchestrator | | 296186fb-ffc7-4c38-bd9d-6c7936aaa340 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-17T01:19:10.000000 |
2026-04-17 01:19:19.279509 | orchestrator | | fc12b7ae-1863-4a94-ac3e-7b2348f2a6f0 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-17T01:19:11.000000 |
2026-04-17 01:19:19.279513 | orchestrator | | 019deb5f-a9c9-413d-9908-634d253bc889 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-17T01:19:12.000000 |
2026-04-17 01:19:19.279517 | orchestrator | | dd93fec8-1423-4462-97e7-d123e5720643 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-17T01:19:15.000000 |
2026-04-17 01:19:19.279521 | orchestrator | | c39c5b8b-88a5-4895-81b3-6762c3612798 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-17T01:19:15.000000 |
2026-04-17 01:19:19.279524 | orchestrator | | bca22c17-d743-405f-b242-670f63b6efac | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-17T01:19:16.000000 |
2026-04-17 01:19:19.279541 | orchestrator | | 2f850eb9-c49d-4f6c-b43d-b57dfb02a22a | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-17T01:19:16.000000 |
2026-04-17 01:19:19.279545 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-17 01:19:19.512427 | orchestrator | + openstack hypervisor list
2026-04-17 01:19:22.143324 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-17 01:19:22.143421 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-17 01:19:22.143432 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-17 01:19:22.143439 | orchestrator | | 24eadbcb-d4a5-4e5a-b774-c5244ac9b6ab | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-17 01:19:22.143445 | orchestrator | | b0ff2ade-bd00-4f01-942b-ab054a1b45b7 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-17 01:19:22.143452 | orchestrator | | a5e8ece6-3863-4b20-a4ee-57d636fb49fd | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-17 01:19:22.143459 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-17 01:19:22.371731 | orchestrator |
2026-04-17 01:19:22.371839 | orchestrator | # Run OpenStack test play
2026-04-17 01:19:22.371852 | orchestrator |
2026-04-17 01:19:22.371873 | orchestrator | + echo
2026-04-17 01:19:22.371890 | orchestrator | + echo '# Run OpenStack test play'
2026-04-17 01:19:22.371899 | orchestrator | + echo
2026-04-17 01:19:22.371907 | orchestrator | + osism apply --environment openstack test
2026-04-17 01:19:23.623188 | orchestrator | 2026-04-17 01:19:23 | INFO  | Trying to run play test in environment openstack
2026-04-17 01:19:23.648683 | orchestrator | 2026-04-17 01:19:23 | INFO  | Prepare task for execution of test.
2026-04-17 01:19:23.710269 | orchestrator | 2026-04-17 01:19:23 | INFO  | Task 251133d5-b016-43f0-b2ea-bfdb233f1c53 (test) was prepared for execution.
2026-04-17 01:19:23.710420 | orchestrator | 2026-04-17 01:19:23 | INFO  | It takes a moment until task 251133d5-b016-43f0-b2ea-bfdb233f1c53 (test) has been started and output is visible here.
2026-04-17 01:22:38.333497 | orchestrator |
2026-04-17 01:22:38.333614 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-17 01:22:38.333632 | orchestrator |
2026-04-17 01:22:38.333637 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-17 01:22:38.333642 | orchestrator | Friday 17 April 2026 01:19:26 +0000 (0:00:00.097) 0:00:00.097 **********
2026-04-17 01:22:38.333647 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333652 | orchestrator |
2026-04-17 01:22:38.333659 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-17 01:22:38.333664 | orchestrator | Friday 17 April 2026 01:19:30 +0000 (0:00:03.773) 0:00:03.871 **********
2026-04-17 01:22:38.333668 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333672 | orchestrator |
2026-04-17 01:22:38.333676 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-17 01:22:38.333679 | orchestrator | Friday 17 April 2026 01:19:34 +0000 (0:00:04.404) 0:00:08.275 **********
2026-04-17 01:22:38.333684 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333689 | orchestrator |
2026-04-17 01:22:38.333695 | orchestrator | TASK [Create test project] *****************************************************
2026-04-17 01:22:38.333701 | orchestrator | Friday 17 April 2026 01:19:41 +0000 (0:00:06.216) 0:00:14.492 **********
2026-04-17 01:22:38.333707 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333714 | orchestrator |
2026-04-17 01:22:38.333721 | orchestrator | TASK [Create test user] ********************************************************
2026-04-17 01:22:38.333727 | orchestrator | Friday 17 April 2026 01:19:45 +0000 (0:00:03.986) 0:00:18.478 **********
2026-04-17 01:22:38.333733 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333739 | orchestrator |
2026-04-17 01:22:38.333745 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-17 01:22:38.333751 | orchestrator | Friday 17 April 2026 01:19:49 +0000 (0:00:04.274) 0:00:22.753 **********
2026-04-17 01:22:38.333757 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-17 01:22:38.333763 | orchestrator | changed: [localhost] => (item=member)
2026-04-17 01:22:38.333771 | orchestrator | changed: [localhost] => (item=creator)
2026-04-17 01:22:38.333776 | orchestrator |
2026-04-17 01:22:38.333782 | orchestrator | TASK [Create test server group] ************************************************
2026-04-17 01:22:38.333788 | orchestrator | Friday 17 April 2026 01:20:01 +0000 (0:00:11.985) 0:00:34.738 **********
2026-04-17 01:22:38.333794 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333799 | orchestrator |
2026-04-17 01:22:38.333805 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-17 01:22:38.333811 | orchestrator | Friday 17 April 2026 01:20:05 +0000 (0:00:04.292) 0:00:39.031 **********
2026-04-17 01:22:38.333816 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333821 | orchestrator |
2026-04-17 01:22:38.333827 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-17 01:22:38.333832 | orchestrator | Friday 17 April 2026 01:20:10 +0000 (0:00:04.953) 0:00:43.985 **********
2026-04-17 01:22:38.333838 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333845 | orchestrator |
2026-04-17 01:22:38.333852 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-17 01:22:38.333859 | orchestrator | Friday 17 April 2026 01:20:15 +0000 (0:00:04.347) 0:00:48.332 **********
2026-04-17 01:22:38.333866 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333872 | orchestrator |
2026-04-17 01:22:38.333877 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-17 01:22:38.333883 | orchestrator | Friday 17 April 2026 01:20:18 +0000 (0:00:03.834) 0:00:52.167 **********
2026-04-17 01:22:38.333888 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333893 | orchestrator |
2026-04-17 01:22:38.333899 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-17 01:22:38.333927 | orchestrator | Friday 17 April 2026 01:20:23 +0000 (0:00:04.258) 0:00:56.425 **********
2026-04-17 01:22:38.333935 | orchestrator | changed: [localhost]
2026-04-17 01:22:38.333941 | orchestrator |
2026-04-17 01:22:38.333947 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-17 01:22:38.333953 | orchestrator | Friday 17 April 2026 01:20:27 +0000 (0:00:14.344) 0:01:00.435 **********
2026-04-17 01:22:38.333959 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-17 01:22:38.333965 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-17 01:22:38.333971 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-17 01:22:38.333977 | orchestrator |
2026-04-17 01:22:38.333984 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-17 01:22:38.333991 | orchestrator | Friday 17 April 2026 01:20:41 +0000 (0:00:14.344) 0:01:14.780 **********
2026-04-17 01:22:38.333998 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-17 01:22:38.334063 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-17 01:22:38.334070 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-17 01:22:38.334075 | orchestrator |
2026-04-17 01:22:38.334079 | orchestrator | TASK [Create test routers]
***************************************************** 2026-04-17 01:22:38.334084 | orchestrator | Friday 17 April 2026 01:20:57 +0000 (0:00:16.290) 0:01:31.070 ********** 2026-04-17 01:22:38.334089 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-17 01:22:38.334094 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-17 01:22:38.334099 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-17 01:22:38.334103 | orchestrator | 2026-04-17 01:22:38.334108 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-17 01:22:38.334113 | orchestrator | 2026-04-17 01:22:38.334117 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-17 01:22:38.334137 | orchestrator | Friday 17 April 2026 01:21:31 +0000 (0:00:33.347) 0:02:04.418 ********** 2026-04-17 01:22:38.334142 | orchestrator | ok: [localhost] 2026-04-17 01:22:38.334147 | orchestrator | 2026-04-17 01:22:38.334152 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-17 01:22:38.334157 | orchestrator | Friday 17 April 2026 01:21:34 +0000 (0:00:03.717) 0:02:08.136 ********** 2026-04-17 01:22:38.334161 | orchestrator | skipping: [localhost] 2026-04-17 01:22:38.334166 | orchestrator | 2026-04-17 01:22:38.334170 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-17 01:22:38.334175 | orchestrator | Friday 17 April 2026 01:21:34 +0000 (0:00:00.037) 0:02:08.173 ********** 2026-04-17 01:22:38.334179 | orchestrator | skipping: [localhost] 2026-04-17 01:22:38.334183 | orchestrator | 2026-04-17 01:22:38.334189 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-17 01:22:38.334195 | orchestrator | Friday 
17 April 2026 01:21:34 +0000 (0:00:00.040) 0:02:08.213 ********** 2026-04-17 01:22:38.334202 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-17 01:22:38.334208 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-17 01:22:38.334214 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-17 01:22:38.334219 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-17 01:22:38.334242 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-17 01:22:38.334249 | orchestrator | skipping: [localhost] 2026-04-17 01:22:38.334255 | orchestrator | 2026-04-17 01:22:38.334262 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-17 01:22:38.334276 | orchestrator | Friday 17 April 2026 01:21:35 +0000 (0:00:00.153) 0:02:08.367 ********** 2026-04-17 01:22:38.334282 | orchestrator | skipping: [localhost] 2026-04-17 01:22:38.334288 | orchestrator | 2026-04-17 01:22:38.334295 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-17 01:22:38.334302 | orchestrator | Friday 17 April 2026 01:21:35 +0000 (0:00:00.147) 0:02:08.514 ********** 2026-04-17 01:22:38.334309 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 01:22:38.334314 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 01:22:38.334319 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 01:22:38.334325 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 01:22:38.334331 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 01:22:38.334337 | orchestrator | 2026-04-17 01:22:38.334344 | 
orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-17 01:22:38.334350 | orchestrator | Friday 17 April 2026 01:21:39 +0000 (0:00:04.515) 0:02:13.030 ********** 2026-04-17 01:22:38.334356 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-17 01:22:38.334363 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-17 01:22:38.334369 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-17 01:22:38.334376 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-04-17 01:22:38.334389 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j876717898459.2768', 'results_file': '/ansible/.ansible_async/j876717898459.2768', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:22:38.334398 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j341288926601.2793', 'results_file': '/ansible/.ansible_async/j341288926601.2793', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:22:38.334404 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j859265095625.2818', 'results_file': '/ansible/.ansible_async/j859265095625.2818', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:22:38.334410 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-04-17 01:22:38.334416 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j226484291381.2843', 'results_file': '/ansible/.ansible_async/j226484291381.2843', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:22:38.334421 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j263514862697.2868', 'results_file': '/ansible/.ansible_async/j263514862697.2868', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:22:38.334427 | orchestrator | 2026-04-17 01:22:38.334433 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-17 01:22:38.334439 | orchestrator | Friday 17 April 2026 01:22:37 +0000 (0:00:57.617) 0:03:10.648 ********** 2026-04-17 01:22:38.334451 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 01:23:49.687003 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 01:23:49.687081 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 01:23:49.687087 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 01:23:49.687105 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 01:23:49.687109 | orchestrator | 2026-04-17 01:23:49.687115 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-17 01:23:49.687119 | orchestrator | Friday 17 April 2026 01:22:41 +0000 (0:00:04.572) 0:03:15.220 ********** 2026-04-17 01:23:49.687123 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
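The "Wait for ..." tasks above poll Ansible async jobs: each creation task is fired asynchronously, and the wait task re-checks every job's status, logging `FAILED - RETRYING` with a decrementing counter (60 retries for instance creation, 30 for metadata) until each job reports `finished`. A rough Python sketch of that bounded-retry pattern, with a hypothetical `check_status` callable in place of Ansible's `async_status` module:

```python
import time

def wait_for_jobs(job_ids, check_status, retries=60, delay=1.0):
    """Poll each async job until it reports finished, mirroring the
    async_status-with-retries/until pattern seen in the play output.

    check_status is a hypothetical callable mapping a job id to a dict
    with a 'finished' key (0 while running, 1 when done).
    """
    pending = list(job_ids)
    for attempt in range(retries):
        still_pending = []
        for job_id in pending:
            if not check_status(job_id).get("finished"):
                still_pending.append(job_id)
        if not still_pending:
            return  # every job completed
        print(f"FAILED - RETRYING: wait for jobs "
              f"({retries - attempt - 1} retries left).")
        pending = still_pending
        time.sleep(delay)
    raise RuntimeError(f"jobs still unfinished after {retries} retries: {pending}")
```

Dropping finished jobs from `pending` each round matches what the log shows: items that complete early stop being polled while the counter keeps counting down for the stragglers.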
2026-04-17 01:23:49.687129 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j616380686921.2979', 'results_file': '/ansible/.ansible_async/j616380686921.2979', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687135 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j656964105746.3004', 'results_file': '/ansible/.ansible_async/j656964105746.3004', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687139 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j959264288766.3029', 'results_file': '/ansible/.ansible_async/j959264288766.3029', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687143 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j977333464241.3054', 'results_file': '/ansible/.ansible_async/j977333464241.3054', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687147 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j856752816697.3079', 'results_file': '/ansible/.ansible_async/j856752816697.3079', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687151 | orchestrator | 2026-04-17 01:23:49.687154 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-17 01:23:49.687158 | orchestrator | Friday 17 April 2026 01:22:51 +0000 (0:00:09.282) 0:03:24.503 ********** 2026-04-17 01:23:49.687162 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 01:23:49.687166 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 01:23:49.687169 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 01:23:49.687182 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 01:23:49.687186 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 01:23:49.687189 | orchestrator | 2026-04-17 01:23:49.687193 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-17 01:23:49.687197 | orchestrator | Friday 17 April 2026 01:22:55 +0000 (0:00:04.399) 0:03:28.903 ********** 2026-04-17 01:23:49.687201 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-04-17 01:23:49.687205 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j393546032151.3155', 'results_file': '/ansible/.ansible_async/j393546032151.3155', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687209 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j791422738840.3180', 'results_file': '/ansible/.ansible_async/j791422738840.3180', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687213 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j161966032193.3206', 'results_file': '/ansible/.ansible_async/j161966032193.3206', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687220 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j219062453490.3232', 'results_file': '/ansible/.ansible_async/j219062453490.3232', 
'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687233 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j867084741828.3258', 'results_file': '/ansible/.ansible_async/j867084741828.3258', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-17 01:23:49.687237 | orchestrator | 2026-04-17 01:23:49.687241 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-17 01:23:49.687245 | orchestrator | Friday 17 April 2026 01:23:05 +0000 (0:00:09.398) 0:03:38.302 ********** 2026-04-17 01:23:49.687249 | orchestrator | changed: [localhost] 2026-04-17 01:23:49.687253 | orchestrator | 2026-04-17 01:23:49.687257 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-17 01:23:49.687261 | orchestrator | Friday 17 April 2026 01:23:11 +0000 (0:00:06.450) 0:03:44.752 ********** 2026-04-17 01:23:49.687264 | orchestrator | changed: [localhost] 2026-04-17 01:23:49.687268 | orchestrator | 2026-04-17 01:23:49.687272 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-17 01:23:49.687276 | orchestrator | Friday 17 April 2026 01:23:25 +0000 (0:00:13.562) 0:03:58.314 ********** 2026-04-17 01:23:49.687280 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-17 01:23:49.687284 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-17 01:23:49.687288 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-17 01:23:49.687292 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-17 01:23:49.687296 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-17 01:23:49.687299 | orchestrator | 2026-04-17 
01:23:49.687306 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-17 01:23:49.687312 | orchestrator | Friday 17 April 2026 01:23:49 +0000 (0:00:24.372) 0:04:22.686 ********** 2026-04-17 01:23:49.687317 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-17 01:23:49.687323 | orchestrator |  "msg": "test: 192.168.112.109" 2026-04-17 01:23:49.687329 | orchestrator | } 2026-04-17 01:23:49.687335 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-17 01:23:49.687342 | orchestrator |  "msg": "test-1: 192.168.112.131" 2026-04-17 01:23:49.687347 | orchestrator | } 2026-04-17 01:23:49.687353 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-17 01:23:49.687359 | orchestrator |  "msg": "test-2: 192.168.112.187" 2026-04-17 01:23:49.687365 | orchestrator | } 2026-04-17 01:23:49.687371 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-17 01:23:49.687387 | orchestrator |  "msg": "test-3: 192.168.112.100" 2026-04-17 01:23:49.687401 | orchestrator | } 2026-04-17 01:23:49.687405 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-17 01:23:49.687409 | orchestrator |  "msg": "test-4: 192.168.112.183" 2026-04-17 01:23:49.687413 | orchestrator | } 2026-04-17 01:23:49.687417 | orchestrator | 2026-04-17 01:23:49.687421 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-17 01:23:49.687425 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-17 01:23:49.687430 | orchestrator | 2026-04-17 01:23:49.687434 | orchestrator | 2026-04-17 01:23:49.687438 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-17 01:23:49.687442 | orchestrator | Friday 17 April 2026 01:23:49 +0000 (0:00:00.106) 0:04:22.793 ********** 2026-04-17 01:23:49.687445 | orchestrator | 
=============================================================================== 2026-04-17 01:23:49.687456 | orchestrator | Wait for instance creation to complete --------------------------------- 57.62s 2026-04-17 01:23:49.687460 | orchestrator | Create test routers ---------------------------------------------------- 33.35s 2026-04-17 01:23:49.687467 | orchestrator | Create floating ip addresses ------------------------------------------- 24.37s 2026-04-17 01:23:49.687471 | orchestrator | Create test subnets ---------------------------------------------------- 16.29s 2026-04-17 01:23:49.687475 | orchestrator | Create test networks --------------------------------------------------- 14.35s 2026-04-17 01:23:49.687479 | orchestrator | Attach test volume ----------------------------------------------------- 13.56s 2026-04-17 01:23:49.687482 | orchestrator | Add member roles to user test ------------------------------------------ 11.99s 2026-04-17 01:23:49.687486 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.40s 2026-04-17 01:23:49.687490 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.28s 2026-04-17 01:23:49.687494 | orchestrator | Create test volume ------------------------------------------------------ 6.45s 2026-04-17 01:23:49.687497 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.22s 2026-04-17 01:23:49.687501 | orchestrator | Create ssh security group ----------------------------------------------- 4.95s 2026-04-17 01:23:49.687505 | orchestrator | Add metadata to instances ----------------------------------------------- 4.57s 2026-04-17 01:23:49.687509 | orchestrator | Create test instances --------------------------------------------------- 4.52s 2026-04-17 01:23:49.687512 | orchestrator | Create test-admin user -------------------------------------------------- 4.40s 2026-04-17 01:23:49.687516 | orchestrator | Add tag to 
instances ---------------------------------------------------- 4.40s 2026-04-17 01:23:49.687520 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.35s 2026-04-17 01:23:49.687524 | orchestrator | Create test server group ------------------------------------------------ 4.29s 2026-04-17 01:23:49.687528 | orchestrator | Create test user -------------------------------------------------------- 4.27s 2026-04-17 01:23:49.687531 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.26s 2026-04-17 01:23:49.836103 | orchestrator | + server_list 2026-04-17 01:23:49.836187 | orchestrator | + openstack --os-cloud test server list 2026-04-17 01:23:53.540961 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-17 01:23:53.541073 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-17 01:23:53.541083 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-17 01:23:53.541091 | orchestrator | | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | test-3 | ACTIVE | test-2=192.168.112.100, 192.168.201.141 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 01:23:53.541099 | orchestrator | | 2199f480-6c2c-41b5-9879-3100faa44da5 | test-4 | ACTIVE | test-3=192.168.112.183, 192.168.202.101 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 01:23:53.541106 | orchestrator | | 77961589-c28c-4657-9786-b39c35d648d0 | test-2 | ACTIVE | test-2=192.168.112.187, 192.168.201.150 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 01:23:53.541113 | orchestrator | | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | test-1 | ACTIVE | test-1=192.168.112.131, 192.168.200.142 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 01:23:53.541120 | orchestrator | | f238b33a-4185-4bb2-b90b-fa645cbc738c | test | ACTIVE 
| test-1=192.168.112.109, 192.168.200.55 | N/A (booted from volume) | SCS-1L-1 | 2026-04-17 01:23:53.541127 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-17 01:23:53.700252 | orchestrator | + openstack --os-cloud test server show test 2026-04-17 01:23:56.757492 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:23:56.757612 | orchestrator | | Field | Value | 2026-04-17 01:23:56.757624 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:23:56.757636 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 01:23:56.757644 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 01:23:56.757651 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 01:23:56.757658 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-17 01:23:56.757665 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 01:23:56.757672 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 01:23:56.757691 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 01:23:56.757708 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 
01:23:56.757718 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 01:23:56.757728 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 01:23:56.757740 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 01:23:56.757750 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 01:23:56.757761 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 01:23:56.757770 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 01:23:56.757781 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 01:23:56.757790 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T01:22:11.000000 | 2026-04-17 01:23:56.757820 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 01:23:56.757832 | orchestrator | | accessIPv4 | | 2026-04-17 01:23:56.757843 | orchestrator | | accessIPv6 | | 2026-04-17 01:23:56.757853 | orchestrator | | addresses | test-1=192.168.112.109, 192.168.200.55 | 2026-04-17 01:23:56.757867 | orchestrator | | config_drive | | 2026-04-17 01:23:56.757875 | orchestrator | | created | 2026-04-17T01:21:44Z | 2026-04-17 01:23:56.757882 | orchestrator | | description | None | 2026-04-17 01:23:56.757889 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 01:23:56.757896 | orchestrator | | hostId | 8f8e10ac9ef46214fa9a99503f4301bf3926559424e8fc5c6f69592f | 2026-04-17 01:23:56.757903 | orchestrator | | host_status | None | 2026-04-17 01:23:56.757920 | orchestrator | | id | f238b33a-4185-4bb2-b90b-fa645cbc738c | 2026-04-17 01:23:56.757927 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 01:23:56.757934 | orchestrator | | 
key_name | test | 2026-04-17 01:23:56.757944 | orchestrator | | locked | False | 2026-04-17 01:23:56.757951 | orchestrator | | locked_reason | None | 2026-04-17 01:23:56.757958 | orchestrator | | name | test | 2026-04-17 01:23:56.757965 | orchestrator | | pinned_availability_zone | None | 2026-04-17 01:23:56.757998 | orchestrator | | progress | 0 | 2026-04-17 01:23:56.758009 | orchestrator | | project_id | faf274d8425f4731be98c89e25352c9a | 2026-04-17 01:23:56.758083 | orchestrator | | properties | hostname='test' | 2026-04-17 01:23:56.758108 | orchestrator | | security_groups | name='ssh' | 2026-04-17 01:23:56.758120 | orchestrator | | | name='icmp' | 2026-04-17 01:23:56.758131 | orchestrator | | server_groups | None | 2026-04-17 01:23:56.758146 | orchestrator | | status | ACTIVE | 2026-04-17 01:23:56.758157 | orchestrator | | tags | test | 2026-04-17 01:23:56.758167 | orchestrator | | trusted_image_certificates | None | 2026-04-17 01:23:56.758179 | orchestrator | | updated | 2026-04-17T01:22:43Z | 2026-04-17 01:23:56.758189 | orchestrator | | user_id | ebc3632c5eee468b8b85becc7b1095c3 | 2026-04-17 01:23:56.758210 | orchestrator | | volumes_attached | delete_on_termination='True', id='c2caf376-4b85-424c-a796-8fe4933a9593' | 2026-04-17 01:23:56.758220 | orchestrator | | | delete_on_termination='False', id='a6934a49-b515-4f70-bb8e-24e968794e6b' | 2026-04-17 01:23:56.759635 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:23:56.925344 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-17 01:23:59.539236 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:23:59.539333 | orchestrator | | Field | Value | 2026-04-17 01:23:59.539353 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:23:59.539358 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 01:23:59.539362 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 01:23:59.539366 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 01:23:59.539382 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-17 01:23:59.539387 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 01:23:59.539391 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 01:23:59.539406 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 01:23:59.539411 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 01:23:59.539415 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 01:23:59.539437 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 01:23:59.539441 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 01:23:59.539445 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 01:23:59.539450 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-17 01:23:59.539457 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 01:23:59.539461 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 01:23:59.539465 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T01:22:11.000000 | 2026-04-17 01:23:59.539474 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 01:23:59.539478 | orchestrator | | accessIPv4 | | 2026-04-17 01:23:59.539481 | orchestrator | | accessIPv6 | | 2026-04-17 01:23:59.539486 | orchestrator | | addresses | test-1=192.168.112.131, 192.168.200.142 | 2026-04-17 01:23:59.539490 | orchestrator | | config_drive | | 2026-04-17 01:23:59.539494 | orchestrator | | created | 2026-04-17T01:21:45Z | 2026-04-17 01:23:59.539506 | orchestrator | | description | None | 2026-04-17 01:23:59.539510 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 01:23:59.539514 | orchestrator | | hostId | 8f8e10ac9ef46214fa9a99503f4301bf3926559424e8fc5c6f69592f | 2026-04-17 01:23:59.539518 | orchestrator | | host_status | None | 2026-04-17 01:23:59.539527 | orchestrator | | id | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | 2026-04-17 01:23:59.539531 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 01:23:59.539535 | orchestrator | | key_name | test | 2026-04-17 01:23:59.539541 | orchestrator | | locked | False | 2026-04-17 01:23:59.539546 | orchestrator | | locked_reason | None | 2026-04-17 01:23:59.539554 | orchestrator | | name | test-1 | 2026-04-17 01:23:59.539558 | orchestrator | | pinned_availability_zone | None | 2026-04-17 01:23:59.539561 | orchestrator | | progress | 0 | 2026-04-17 01:23:59.539565 | orchestrator | 
| project_id | faf274d8425f4731be98c89e25352c9a | 2026-04-17 01:23:59.539569 | orchestrator | | properties | hostname='test-1' | 2026-04-17 01:23:59.539578 | orchestrator | | security_groups | name='ssh' | 2026-04-17 01:23:59.539582 | orchestrator | | | name='icmp' | 2026-04-17 01:23:59.539586 | orchestrator | | server_groups | None | 2026-04-17 01:23:59.539593 | orchestrator | | status | ACTIVE | 2026-04-17 01:23:59.539601 | orchestrator | | tags | test | 2026-04-17 01:23:59.539605 | orchestrator | | trusted_image_certificates | None | 2026-04-17 01:23:59.539609 | orchestrator | | updated | 2026-04-17T01:22:44Z | 2026-04-17 01:23:59.539613 | orchestrator | | user_id | ebc3632c5eee468b8b85becc7b1095c3 | 2026-04-17 01:23:59.539617 | orchestrator | | volumes_attached | delete_on_termination='True', id='5d35e91f-b14d-426a-988c-bfd1bcc023d1' | 2026-04-17 01:23:59.544155 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:23:59.788895 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-17 01:24:02.900760 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:02.900849 | orchestrator | | Field | Value | 2026-04-17 01:24:02.900867 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:02.900892 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 01:24:02.900899 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 01:24:02.900906 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 01:24:02.900911 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-17 01:24:02.900918 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 01:24:02.900924 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 01:24:02.900944 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 01:24:02.900951 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 01:24:02.900957 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 01:24:02.900990 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 01:24:02.901003 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 01:24:02.901009 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 01:24:02.901016 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 01:24:02.901022 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 01:24:02.901028 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 01:24:02.901034 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T01:22:11.000000 | 2026-04-17 01:24:02.901047 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 01:24:02.901054 | orchestrator | | accessIPv4 | | 2026-04-17 01:24:02.901060 | orchestrator | | accessIPv6 | | 2026-04-17 01:24:02.901074 | orchestrator | | 
addresses | test-2=192.168.112.187, 192.168.201.150 | 2026-04-17 01:24:02.901081 | orchestrator | | config_drive | | 2026-04-17 01:24:02.901088 | orchestrator | | created | 2026-04-17T01:21:45Z | 2026-04-17 01:24:02.901094 | orchestrator | | description | None | 2026-04-17 01:24:02.901100 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 01:24:02.901106 | orchestrator | | hostId | 8f8e10ac9ef46214fa9a99503f4301bf3926559424e8fc5c6f69592f | 2026-04-17 01:24:02.901112 | orchestrator | | host_status | None | 2026-04-17 01:24:02.901123 | orchestrator | | id | 77961589-c28c-4657-9786-b39c35d648d0 | 2026-04-17 01:24:02.901130 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 01:24:02.901140 | orchestrator | | key_name | test | 2026-04-17 01:24:02.901146 | orchestrator | | locked | False | 2026-04-17 01:24:02.901151 | orchestrator | | locked_reason | None | 2026-04-17 01:24:02.901155 | orchestrator | | name | test-2 | 2026-04-17 01:24:02.901162 | orchestrator | | pinned_availability_zone | None | 2026-04-17 01:24:02.901168 | orchestrator | | progress | 0 | 2026-04-17 01:24:02.901174 | orchestrator | | project_id | faf274d8425f4731be98c89e25352c9a | 2026-04-17 01:24:02.901180 | orchestrator | | properties | hostname='test-2' | 2026-04-17 01:24:02.901198 | orchestrator | | security_groups | name='ssh' | 2026-04-17 01:24:02.901210 | orchestrator | | | name='icmp' | 2026-04-17 01:24:02.901216 | orchestrator | | server_groups | None | 2026-04-17 01:24:02.901225 | orchestrator | | status | ACTIVE | 2026-04-17 01:24:02.901231 | orchestrator | | tags | test | 2026-04-17 01:24:02.901238 | orchestrator | | 
trusted_image_certificates | None | 2026-04-17 01:24:02.901244 | orchestrator | | updated | 2026-04-17T01:22:44Z | 2026-04-17 01:24:02.901250 | orchestrator | | user_id | ebc3632c5eee468b8b85becc7b1095c3 | 2026-04-17 01:24:02.901256 | orchestrator | | volumes_attached | delete_on_termination='True', id='ff70f7e6-fe92-470f-9081-3a09ba797ec9' | 2026-04-17 01:24:02.905449 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:03.151170 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-17 01:24:06.168887 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:06.169067 | orchestrator | | Field | Value | 2026-04-17 01:24:06.169083 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:06.169095 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 01:24:06.169101 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-17 01:24:06.169106 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 01:24:06.169111 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-17 01:24:06.169116 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 01:24:06.169121 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 01:24:06.169138 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 01:24:06.169157 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 01:24:06.169162 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 01:24:06.169167 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 01:24:06.169175 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 01:24:06.169180 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 01:24:06.169185 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 01:24:06.169190 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 01:24:06.169195 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 01:24:06.169200 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T01:22:14.000000 | 2026-04-17 01:24:06.169212 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 01:24:06.169217 | orchestrator | | accessIPv4 | | 2026-04-17 01:24:06.169222 | orchestrator | | accessIPv6 | | 2026-04-17 01:24:06.169227 | orchestrator | | addresses | test-2=192.168.112.100, 192.168.201.141 | 2026-04-17 01:24:06.169234 | orchestrator | | config_drive | | 2026-04-17 01:24:06.169240 | orchestrator | | created | 2026-04-17T01:21:46Z | 2026-04-17 01:24:06.169248 | orchestrator | | description | None | 2026-04-17 01:24:06.169256 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 01:24:06.169265 | orchestrator | | hostId | 8f8e10ac9ef46214fa9a99503f4301bf3926559424e8fc5c6f69592f | 2026-04-17 01:24:06.169278 | orchestrator | | host_status | None | 2026-04-17 01:24:06.169292 | orchestrator | | id | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | 2026-04-17 01:24:06.169301 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 01:24:06.169309 | orchestrator | | key_name | test | 2026-04-17 01:24:06.169316 | orchestrator | | locked | False | 2026-04-17 01:24:06.169328 | orchestrator | | locked_reason | None | 2026-04-17 01:24:06.169337 | orchestrator | | name | test-3 | 2026-04-17 01:24:06.169345 | orchestrator | | pinned_availability_zone | None | 2026-04-17 01:24:06.169352 | orchestrator | | progress | 0 | 2026-04-17 01:24:06.169368 | orchestrator | | project_id | faf274d8425f4731be98c89e25352c9a | 2026-04-17 01:24:06.169377 | orchestrator | | properties | hostname='test-3' | 2026-04-17 01:24:06.169391 | orchestrator | | security_groups | name='ssh' | 2026-04-17 01:24:06.169400 | orchestrator | | | name='icmp' | 2026-04-17 01:24:06.169409 | orchestrator | | server_groups | None | 2026-04-17 01:24:06.169421 | orchestrator | | status | ACTIVE | 2026-04-17 01:24:06.169430 | orchestrator | | tags | test | 2026-04-17 01:24:06.169438 | orchestrator | | trusted_image_certificates | None | 2026-04-17 01:24:06.169448 | orchestrator | | updated | 2026-04-17T01:22:45Z | 2026-04-17 01:24:06.169454 | orchestrator | | user_id | ebc3632c5eee468b8b85becc7b1095c3 | 2026-04-17 01:24:06.169465 | orchestrator | | volumes_attached | delete_on_termination='True', id='db74aae4-520d-4ba8-ac34-e9babaab665d' | 2026-04-17 01:24:06.172673 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:06.392427 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-17 01:24:09.344011 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:09.344091 | orchestrator | | Field | Value | 2026-04-17 01:24:09.344098 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:09.344103 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-17 01:24:09.344107 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-17 01:24:09.344111 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-17 01:24:09.344115 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-17 01:24:09.344133 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-17 01:24:09.344137 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-17 
01:24:09.344153 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-17 01:24:09.344157 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-17 01:24:09.344161 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-17 01:24:09.344459 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-17 01:24:09.344479 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-17 01:24:09.344485 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-17 01:24:09.344492 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-17 01:24:09.344508 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-17 01:24:09.344515 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-17 01:24:09.344522 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-17T01:22:15.000000 | 2026-04-17 01:24:09.344535 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-17 01:24:09.344545 | orchestrator | | accessIPv4 | | 2026-04-17 01:24:09.344552 | orchestrator | | accessIPv6 | | 2026-04-17 01:24:09.344558 | orchestrator | | addresses | test-3=192.168.112.183, 192.168.202.101 | 2026-04-17 01:24:09.344564 | orchestrator | | config_drive | | 2026-04-17 01:24:09.344570 | orchestrator | | created | 2026-04-17T01:21:46Z | 2026-04-17 01:24:09.344583 | orchestrator | | description | None | 2026-04-17 01:24:09.344590 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-17 01:24:09.344596 | orchestrator | | hostId | 434a4323ebd1ca60b4fa64c56ea60a38e2b8e470d9865497c7c02616 | 2026-04-17 01:24:09.344603 | orchestrator | | host_status | None | 2026-04-17 01:24:09.344616 | orchestrator | | id | 
2199f480-6c2c-41b5-9879-3100faa44da5 | 2026-04-17 01:24:09.344626 | orchestrator | | image | N/A (booted from volume) | 2026-04-17 01:24:09.344633 | orchestrator | | key_name | test | 2026-04-17 01:24:09.344640 | orchestrator | | locked | False | 2026-04-17 01:24:09.344647 | orchestrator | | locked_reason | None | 2026-04-17 01:24:09.344660 | orchestrator | | name | test-4 | 2026-04-17 01:24:09.344667 | orchestrator | | pinned_availability_zone | None | 2026-04-17 01:24:09.344671 | orchestrator | | progress | 0 | 2026-04-17 01:24:09.344676 | orchestrator | | project_id | faf274d8425f4731be98c89e25352c9a | 2026-04-17 01:24:09.344680 | orchestrator | | properties | hostname='test-4' | 2026-04-17 01:24:09.344691 | orchestrator | | security_groups | name='ssh' | 2026-04-17 01:24:09.344699 | orchestrator | | | name='icmp' | 2026-04-17 01:24:09.344704 | orchestrator | | server_groups | None | 2026-04-17 01:24:09.344709 | orchestrator | | status | ACTIVE | 2026-04-17 01:24:09.344713 | orchestrator | | tags | test | 2026-04-17 01:24:09.344721 | orchestrator | | trusted_image_certificates | None | 2026-04-17 01:24:09.344727 | orchestrator | | updated | 2026-04-17T01:22:46Z | 2026-04-17 01:24:09.344734 | orchestrator | | user_id | ebc3632c5eee468b8b85becc7b1095c3 | 2026-04-17 01:24:09.344741 | orchestrator | | volumes_attached | delete_on_termination='True', id='2bf112ca-01cf-4ef2-bfde-b1500a2eee58' | 2026-04-17 01:24:09.349470 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-17 01:24:09.577865 | orchestrator | + server_ping 2026-04-17 01:24:09.578815 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-17 01:24:09.578866 | orchestrator | ++ tr -d '\r' 2026-04-17 01:24:12.328886 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-17 01:24:12.329063 | orchestrator | + ping -c3 192.168.112.131 2026-04-17 01:24:12.342821 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2026-04-17 01:24:12.342919 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=7.30 ms 2026-04-17 01:24:13.339345 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.10 ms 2026-04-17 01:24:14.341416 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.84 ms 2026-04-17 01:24:14.341510 | orchestrator | 2026-04-17 01:24:14.341522 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-17 01:24:14.341550 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-17 01:24:14.341557 | orchestrator | rtt min/avg/max/mdev = 1.837/3.746/7.304/2.517 ms 2026-04-17 01:24:14.341818 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-17 01:24:14.341838 | orchestrator | + ping -c3 192.168.112.109 2026-04-17 01:24:14.354853 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 
2026-04-17 01:24:14.355055 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=8.79 ms 2026-04-17 01:24:15.350492 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.38 ms 2026-04-17 01:24:16.352293 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.71 ms 2026-04-17 01:24:16.352382 | orchestrator | 2026-04-17 01:24:16.352390 | orchestrator | --- 192.168.112.109 ping statistics --- 2026-04-17 01:24:16.352397 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-17 01:24:16.352403 | orchestrator | rtt min/avg/max/mdev = 1.713/4.293/8.790/3.191 ms 2026-04-17 01:24:16.353004 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-17 01:24:16.353036 | orchestrator | + ping -c3 192.168.112.183 2026-04-17 01:24:16.365296 | orchestrator | PING 192.168.112.183 (192.168.112.183) 56(84) bytes of data. 2026-04-17 01:24:16.365398 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=1 ttl=63 time=7.97 ms 2026-04-17 01:24:17.361176 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=2 ttl=63 time=2.76 ms 2026-04-17 01:24:18.363416 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=3 ttl=63 time=2.29 ms 2026-04-17 01:24:18.363504 | orchestrator | 2026-04-17 01:24:18.363515 | orchestrator | --- 192.168.112.183 ping statistics --- 2026-04-17 01:24:18.363524 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-17 01:24:18.363531 | orchestrator | rtt min/avg/max/mdev = 2.289/4.340/7.970/2.573 ms 2026-04-17 01:24:18.363539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-17 01:24:18.363547 | orchestrator | + ping -c3 192.168.112.100 2026-04-17 01:24:18.376401 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 
2026-04-17 01:24:18.376497 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=7.20 ms 2026-04-17 01:24:19.373074 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.20 ms 2026-04-17 01:24:20.372843 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.53 ms 2026-04-17 01:24:20.372932 | orchestrator | 2026-04-17 01:24:20.372940 | orchestrator | --- 192.168.112.100 ping statistics --- 2026-04-17 01:24:20.372945 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-17 01:24:20.372949 | orchestrator | rtt min/avg/max/mdev = 1.534/3.642/7.197/2.528 ms 2026-04-17 01:24:20.373440 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-17 01:24:20.373459 | orchestrator | + ping -c3 192.168.112.187 2026-04-17 01:24:20.382923 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2026-04-17 01:24:20.383044 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=6.17 ms 2026-04-17 01:24:21.380632 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.32 ms 2026-04-17 01:24:22.382366 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.70 ms 2026-04-17 01:24:22.382465 | orchestrator | 2026-04-17 01:24:22.382479 | orchestrator | --- 192.168.112.187 ping statistics --- 2026-04-17 01:24:22.382489 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-17 01:24:22.382498 | orchestrator | rtt min/avg/max/mdev = 1.701/3.396/6.171/1.978 ms 2026-04-17 01:24:22.382791 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-17 01:24:22.382808 | orchestrator | + compute_list 2026-04-17 01:24:22.382814 | orchestrator | + osism manage compute list testbed-node-3 2026-04-17 01:24:26.867738 | orchestrator | +------+--------+----------+ 2026-04-17 01:24:26.867830 | orchestrator | | ID | Name | Status 
| 2026-04-17 01:24:26.867840 | orchestrator | |------+--------+----------| 2026-04-17 01:24:26.867851 | orchestrator | +------+--------+----------+ 2026-04-17 01:24:27.162140 | orchestrator | + osism manage compute list testbed-node-4 2026-04-17 01:24:30.585867 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:24:30.585948 | orchestrator | | ID | Name | Status | 2026-04-17 01:24:30.585986 | orchestrator | |--------------------------------------+--------+----------| 2026-04-17 01:24:30.585991 | orchestrator | | 2199f480-6c2c-41b5-9879-3100faa44da5 | test-4 | ACTIVE | 2026-04-17 01:24:30.585995 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:24:30.870781 | orchestrator | + osism manage compute list testbed-node-5 2026-04-17 01:24:34.232705 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:24:34.232821 | orchestrator | | ID | Name | Status | 2026-04-17 01:24:34.232835 | orchestrator | |--------------------------------------+--------+----------| 2026-04-17 01:24:34.232862 | orchestrator | | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | test-3 | ACTIVE | 2026-04-17 01:24:34.232867 | orchestrator | | 77961589-c28c-4657-9786-b39c35d648d0 | test-2 | ACTIVE | 2026-04-17 01:24:34.232871 | orchestrator | | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | test-1 | ACTIVE | 2026-04-17 01:24:34.232876 | orchestrator | | f238b33a-4185-4bb2-b90b-fa645cbc738c | test | ACTIVE | 2026-04-17 01:24:34.232881 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:24:34.502230 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-17 01:24:37.585571 | orchestrator | 2026-04-17 01:24:37 | INFO  | Live migrating server 2199f480-6c2c-41b5-9879-3100faa44da5 2026-04-17 01:24:51.536491 | orchestrator | 2026-04-17 01:24:51 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still 
in progress 2026-04-17 01:24:53.928453 | orchestrator | 2026-04-17 01:24:53 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:24:56.363134 | orchestrator | 2026-04-17 01:24:56 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:24:58.726773 | orchestrator | 2026-04-17 01:24:58 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:01.465310 | orchestrator | 2026-04-17 01:25:01 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:04.209300 | orchestrator | 2026-04-17 01:25:04 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:06.593731 | orchestrator | 2026-04-17 01:25:06 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:08.951113 | orchestrator | 2026-04-17 01:25:08 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:11.240081 | orchestrator | 2026-04-17 01:25:11 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:13.606711 | orchestrator | 2026-04-17 01:25:13 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:15.935918 | orchestrator | 2026-04-17 01:25:15 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress 2026-04-17 01:25:18.307851 | orchestrator | 2026-04-17 01:25:18 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) completed with status ACTIVE 2026-04-17 01:25:18.585725 | orchestrator | + compute_list 2026-04-17 01:25:18.585858 | orchestrator | + osism manage compute list testbed-node-3 2026-04-17 01:25:21.637271 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-17 01:25:21.637370 | orchestrator | | ID | Name | Status | 2026-04-17 01:25:21.637386 | orchestrator | |--------------------------------------+--------+----------| 2026-04-17 01:25:21.637393 | orchestrator | | 2199f480-6c2c-41b5-9879-3100faa44da5 | test-4 | ACTIVE | 2026-04-17 01:25:21.637399 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:25:21.921411 | orchestrator | + osism manage compute list testbed-node-4 2026-04-17 01:25:24.604597 | orchestrator | +------+--------+----------+ 2026-04-17 01:25:24.604675 | orchestrator | | ID | Name | Status | 2026-04-17 01:25:24.604681 | orchestrator | |------+--------+----------| 2026-04-17 01:25:24.604686 | orchestrator | +------+--------+----------+ 2026-04-17 01:25:24.862100 | orchestrator | + osism manage compute list testbed-node-5 2026-04-17 01:25:27.924854 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:25:27.925013 | orchestrator | | ID | Name | Status | 2026-04-17 01:25:27.925031 | orchestrator | |--------------------------------------+--------+----------| 2026-04-17 01:25:27.925043 | orchestrator | | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | test-3 | ACTIVE | 2026-04-17 01:25:27.925082 | orchestrator | | 77961589-c28c-4657-9786-b39c35d648d0 | test-2 | ACTIVE | 2026-04-17 01:25:27.925093 | orchestrator | | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | test-1 | ACTIVE | 2026-04-17 01:25:27.925105 | orchestrator | | f238b33a-4185-4bb2-b90b-fa645cbc738c | test | ACTIVE | 2026-04-17 01:25:27.925116 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-17 01:25:28.197032 | orchestrator | + server_ping 2026-04-17 01:25:28.198070 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-17 01:25:28.198520 | orchestrator | ++ tr -d '\r' 2026-04-17 01:25:30.897956 | orchestrator | + for 
address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:25:30.898115 | orchestrator | + ping -c3 192.168.112.131
2026-04-17 01:25:30.908551 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
2026-04-17 01:25:30.908656 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=8.12 ms
2026-04-17 01:25:31.903839 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=1.95 ms
2026-04-17 01:25:32.905388 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.42 ms
2026-04-17 01:25:32.905462 | orchestrator |
2026-04-17 01:25:32.905471 | orchestrator | --- 192.168.112.131 ping statistics ---
2026-04-17 01:25:32.905478 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:25:32.905483 | orchestrator | rtt min/avg/max/mdev = 1.415/3.828/8.119/3.041 ms
2026-04-17 01:25:32.905489 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:25:32.905496 | orchestrator | + ping -c3 192.168.112.109
2026-04-17 01:25:32.917354 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-17 01:25:32.917425 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.57 ms
2026-04-17 01:25:33.914077 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.37 ms
2026-04-17 01:25:34.916258 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.30 ms
2026-04-17 01:25:34.916351 | orchestrator |
2026-04-17 01:25:34.916359 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-17 01:25:34.916364 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:25:34.916369 | orchestrator | rtt min/avg/max/mdev = 2.299/4.080/7.572/2.468 ms
2026-04-17 01:25:34.916374 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:25:34.916380 | orchestrator | + ping -c3 192.168.112.183
2026-04-17 01:25:34.927845 | orchestrator | PING 192.168.112.183 (192.168.112.183) 56(84) bytes of data.
2026-04-17 01:25:34.927914 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=1 ttl=63 time=7.33 ms
2026-04-17 01:25:35.924387 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=2 ttl=63 time=2.13 ms
2026-04-17 01:25:36.925032 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=3 ttl=63 time=1.36 ms
2026-04-17 01:25:36.925124 | orchestrator |
2026-04-17 01:25:36.925134 | orchestrator | --- 192.168.112.183 ping statistics ---
2026-04-17 01:25:36.925143 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:25:36.925149 | orchestrator | rtt min/avg/max/mdev = 1.364/3.609/7.332/2.651 ms
2026-04-17 01:25:36.925292 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:25:36.925305 | orchestrator | + ping -c3 192.168.112.100
2026-04-17 01:25:36.938606 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-04-17 01:25:36.938680 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=8.33 ms
2026-04-17 01:25:37.934380 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.34 ms
2026-04-17 01:25:38.935845 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.68 ms
2026-04-17 01:25:38.935910 | orchestrator |
2026-04-17 01:25:38.935917 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-04-17 01:25:38.935969 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:25:38.935975 | orchestrator | rtt min/avg/max/mdev = 1.683/4.119/8.330/2.989 ms
2026-04-17 01:25:38.935979 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:25:38.936002 | orchestrator | + ping -c3 192.168.112.187
2026-04-17 01:25:38.950480 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-04-17 01:25:38.950598 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=9.26 ms
2026-04-17 01:25:39.944523 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.02 ms
2026-04-17 01:25:40.946255 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.72 ms
2026-04-17 01:25:40.946329 | orchestrator |
2026-04-17 01:25:40.946336 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-04-17 01:25:40.946342 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:25:40.946346 | orchestrator | rtt min/avg/max/mdev = 1.717/4.329/9.256/3.485 ms
2026-04-17 01:25:40.946351 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-04-17 01:25:44.020466 | orchestrator | 2026-04-17 01:25:44 | INFO  | Live migrating server 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5
2026-04-17 01:25:56.406420 | orchestrator | 2026-04-17 01:25:56 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:25:58.725679 | orchestrator | 2026-04-17 01:25:58 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:01.093229 | orchestrator | 2026-04-17 01:26:01 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:03.368399 | orchestrator | 2026-04-17 01:26:03 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:05.630318 | orchestrator | 2026-04-17 01:26:05 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:08.003812 | orchestrator | 2026-04-17 01:26:08 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:10.274512 | orchestrator | 2026-04-17 01:26:10 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:12.605370 | orchestrator | 2026-04-17 01:26:12 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:14.965826 | orchestrator | 2026-04-17 01:26:14 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:26:17.223844 | orchestrator | 2026-04-17 01:26:17 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) completed with status ACTIVE
2026-04-17 01:26:17.223967 | orchestrator | 2026-04-17 01:26:17 | INFO  | Live migrating server 77961589-c28c-4657-9786-b39c35d648d0
2026-04-17 01:26:28.785661 | orchestrator | 2026-04-17 01:26:28 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:31.072899 | orchestrator | 2026-04-17 01:26:31 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:33.432666 | orchestrator | 2026-04-17 01:26:33 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:35.677630 | orchestrator | 2026-04-17 01:26:35 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:37.964204 | orchestrator | 2026-04-17 01:26:37 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:40.260333 | orchestrator | 2026-04-17 01:26:40 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:42.576263 | orchestrator | 2026-04-17 01:26:42 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:44.858260 | orchestrator | 2026-04-17 01:26:44 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:26:47.362123 | orchestrator | 2026-04-17 01:26:47 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) completed with status ACTIVE
2026-04-17 01:26:47.362196 | orchestrator | 2026-04-17 01:26:47 | INFO  | Live migrating server d0ae4266-ef8b-410a-ac7e-6835e94eeda1
2026-04-17 01:26:59.982067 | orchestrator | 2026-04-17 01:26:59 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:02.382326 | orchestrator | 2026-04-17 01:27:02 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:04.904400 | orchestrator | 2026-04-17 01:27:04 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:07.267484 | orchestrator | 2026-04-17 01:27:07 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:09.675087 | orchestrator | 2026-04-17 01:27:09 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:11.969559 | orchestrator | 2026-04-17 01:27:11 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:14.269512 | orchestrator | 2026-04-17 01:27:14 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:16.571674 | orchestrator | 2026-04-17 01:27:16 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:27:19.134199 | orchestrator | 2026-04-17 01:27:19 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) completed with status ACTIVE
2026-04-17 01:27:19.134290 | orchestrator | 2026-04-17 01:27:19 | INFO  | Live migrating server f238b33a-4185-4bb2-b90b-fa645cbc738c
2026-04-17 01:27:31.318704 | orchestrator | 2026-04-17 01:27:31 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:33.689959 | orchestrator | 2026-04-17 01:27:33 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:36.080067 | orchestrator | 2026-04-17 01:27:36 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:38.455409 | orchestrator | 2026-04-17 01:27:38 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:40.848948 | orchestrator | 2026-04-17 01:27:40 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:43.147693 | orchestrator | 2026-04-17 01:27:43 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:45.451619 | orchestrator | 2026-04-17 01:27:45 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:47.730400 | orchestrator | 2026-04-17 01:27:47 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:50.109653 | orchestrator | 2026-04-17 01:27:50 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:52.501402 | orchestrator | 2026-04-17 01:27:52 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:27:54.846388 | orchestrator | 2026-04-17 01:27:54 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) completed with status ACTIVE
2026-04-17 01:27:55.118082 | orchestrator | + compute_list
2026-04-17 01:27:55.118186 | orchestrator | + osism manage compute list testbed-node-3
2026-04-17 01:27:58.345983 | orchestrator | +--------------------------------------+--------+----------+
2026-04-17 01:27:58.346100 | orchestrator | | ID | Name | Status |
2026-04-17 01:27:58.346114 | orchestrator | |--------------------------------------+--------+----------|
2026-04-17 01:27:58.346151 | orchestrator | | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | test-3 | ACTIVE |
2026-04-17 01:27:58.346157 | orchestrator | | 2199f480-6c2c-41b5-9879-3100faa44da5 | test-4 | ACTIVE |
2026-04-17 01:27:58.346161 | orchestrator | | 77961589-c28c-4657-9786-b39c35d648d0 | test-2 | ACTIVE |
2026-04-17 01:27:58.346166 | orchestrator | | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | test-1 | ACTIVE |
2026-04-17 01:27:58.346171 | orchestrator | | f238b33a-4185-4bb2-b90b-fa645cbc738c | test | ACTIVE |
2026-04-17 01:27:58.346176 | orchestrator | +--------------------------------------+--------+----------+
2026-04-17 01:27:58.607981 | orchestrator | + osism manage compute list testbed-node-4
2026-04-17 01:28:01.336155 | orchestrator | +------+--------+----------+
2026-04-17 01:28:01.336259 | orchestrator | | ID | Name | Status |
2026-04-17 01:28:01.336272 | orchestrator | |------+--------+----------|
2026-04-17 01:28:01.336281 | orchestrator | +------+--------+----------+
2026-04-17 01:28:01.588966 | orchestrator | + osism manage compute list testbed-node-5
2026-04-17 01:28:04.346270 | orchestrator | +------+--------+----------+
2026-04-17 01:28:04.346434 | orchestrator | | ID | Name | Status |
2026-04-17 01:28:04.346468 | orchestrator | |------+--------+----------|
2026-04-17 01:28:04.346492 | orchestrator | +------+--------+----------+
2026-04-17 01:28:04.610345 | orchestrator | + server_ping
2026-04-17 01:28:04.611179 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-17 01:28:04.612209 | orchestrator | ++ tr -d '\r'
2026-04-17 01:28:07.244024 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:28:07.244115 | orchestrator | + ping -c3 192.168.112.131
2026-04-17 01:28:07.253496 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
2026-04-17 01:28:07.253589 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=5.14 ms
2026-04-17 01:28:08.251059 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=1.79 ms
2026-04-17 01:28:09.252691 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.48 ms
2026-04-17 01:28:09.252783 | orchestrator |
2026-04-17 01:28:09.252792 | orchestrator | --- 192.168.112.131 ping statistics ---
2026-04-17 01:28:09.252798 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:28:09.252802 | orchestrator | rtt min/avg/max/mdev = 1.475/2.803/5.141/1.658 ms
2026-04-17 01:28:09.253560 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:28:09.253609 | orchestrator | + ping -c3 192.168.112.109
2026-04-17 01:28:09.263542 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-17 01:28:09.263623 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.93 ms
2026-04-17 01:28:10.260104 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.13 ms
2026-04-17 01:28:11.261992 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.88 ms
2026-04-17 01:28:11.262151 | orchestrator |
2026-04-17 01:28:11.262165 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-17 01:28:11.262173 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 01:28:11.262181 | orchestrator | rtt min/avg/max/mdev = 1.884/3.646/6.926/2.321 ms
2026-04-17 01:28:11.262664 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:28:11.262742 | orchestrator | + ping -c3 192.168.112.183
2026-04-17 01:28:11.275721 | orchestrator | PING 192.168.112.183 (192.168.112.183) 56(84) bytes of data.
2026-04-17 01:28:11.275821 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=1 ttl=63 time=9.04 ms
2026-04-17 01:28:12.269378 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=2 ttl=63 time=2.32 ms
2026-04-17 01:28:13.269961 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=3 ttl=63 time=1.43 ms
2026-04-17 01:28:13.270097 | orchestrator |
2026-04-17 01:28:13.270109 | orchestrator | --- 192.168.112.183 ping statistics ---
2026-04-17 01:28:13.270117 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 01:28:13.270145 | orchestrator | rtt min/avg/max/mdev = 1.433/4.263/9.035/3.393 ms
2026-04-17 01:28:13.270552 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:28:13.270576 | orchestrator | + ping -c3 192.168.112.100
2026-04-17 01:28:13.279715 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-04-17 01:28:13.279808 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=4.69 ms
2026-04-17 01:28:14.279311 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.33 ms
2026-04-17 01:28:15.280753 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.71 ms
2026-04-17 01:28:15.280847 | orchestrator |
2026-04-17 01:28:15.280856 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-04-17 01:28:15.280929 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:28:15.280936 | orchestrator | rtt min/avg/max/mdev = 1.714/2.909/4.688/1.282 ms
2026-04-17 01:28:15.281214 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:28:15.281233 | orchestrator | + ping -c3 192.168.112.187
2026-04-17 01:28:15.290268 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-04-17 01:28:15.290352 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=5.04 ms
2026-04-17 01:28:16.289829 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.85 ms
2026-04-17 01:28:17.289917 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.75 ms
2026-04-17 01:28:17.290205 | orchestrator |
2026-04-17 01:28:17.290236 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-04-17 01:28:17.290250 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:28:17.290261 | orchestrator | rtt min/avg/max/mdev = 1.749/3.213/5.042/1.368 ms
2026-04-17 01:28:17.290425 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-17 01:28:20.501160 | orchestrator | 2026-04-17 01:28:20 | INFO  | Live migrating server 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5
2026-04-17 01:28:32.343947 | orchestrator | 2026-04-17 01:28:32 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:34.720648 | orchestrator | 2026-04-17 01:28:34 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:37.117583 | orchestrator | 2026-04-17 01:28:37 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:39.455708 | orchestrator | 2026-04-17 01:28:39 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:41.820812 | orchestrator | 2026-04-17 01:28:41 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:44.122747 | orchestrator | 2026-04-17 01:28:44 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:46.435754 | orchestrator | 2026-04-17 01:28:46 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:48.829189 | orchestrator | 2026-04-17 01:28:48 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:28:51.153238 | orchestrator | 2026-04-17 01:28:51 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) completed with status ACTIVE
2026-04-17 01:28:51.153322 | orchestrator | 2026-04-17 01:28:51 | INFO  | Live migrating server 2199f480-6c2c-41b5-9879-3100faa44da5
2026-04-17 01:29:01.098769 | orchestrator | 2026-04-17 01:29:01 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:03.446504 | orchestrator | 2026-04-17 01:29:03 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:05.777525 | orchestrator | 2026-04-17 01:29:05 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:08.113181 | orchestrator | 2026-04-17 01:29:08 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:10.480100 | orchestrator | 2026-04-17 01:29:10 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:12.823349 | orchestrator | 2026-04-17 01:29:12 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:15.177533 | orchestrator | 2026-04-17 01:29:15 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:17.560766 | orchestrator | 2026-04-17 01:29:17 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:19.850230 | orchestrator | 2026-04-17 01:29:19 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:29:22.124196 | orchestrator | 2026-04-17 01:29:22 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) completed with status ACTIVE
2026-04-17 01:29:22.124288 | orchestrator | 2026-04-17 01:29:22 | INFO  | Live migrating server 77961589-c28c-4657-9786-b39c35d648d0
2026-04-17 01:29:32.335859 | orchestrator | 2026-04-17 01:29:32 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:34.717724 | orchestrator | 2026-04-17 01:29:34 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:37.092150 | orchestrator | 2026-04-17 01:29:37 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:39.403927 | orchestrator | 2026-04-17 01:29:39 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:41.674051 | orchestrator | 2026-04-17 01:29:41 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:43.945169 | orchestrator | 2026-04-17 01:29:43 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:46.297990 | orchestrator | 2026-04-17 01:29:46 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:48.659908 | orchestrator | 2026-04-17 01:29:48 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:29:50.938372 | orchestrator | 2026-04-17 01:29:50 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) completed with status ACTIVE
2026-04-17 01:29:50.938469 | orchestrator | 2026-04-17 01:29:50 | INFO  | Live migrating server d0ae4266-ef8b-410a-ac7e-6835e94eeda1
2026-04-17 01:30:00.751958 | orchestrator | 2026-04-17 01:30:00 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:03.123155 | orchestrator | 2026-04-17 01:30:03 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:05.466996 | orchestrator | 2026-04-17 01:30:05 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:07.838644 | orchestrator | 2026-04-17 01:30:07 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:10.103740 | orchestrator | 2026-04-17 01:30:10 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:12.414909 | orchestrator | 2026-04-17 01:30:12 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:14.678108 | orchestrator | 2026-04-17 01:30:14 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:17.022057 | orchestrator | 2026-04-17 01:30:17 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:19.351200 | orchestrator | 2026-04-17 01:30:19 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:30:21.653790 | orchestrator | 2026-04-17 01:30:21 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) completed with status ACTIVE
2026-04-17 01:30:21.653905 | orchestrator | 2026-04-17 01:30:21 | INFO  | Live migrating server f238b33a-4185-4bb2-b90b-fa645cbc738c
2026-04-17 01:30:32.794711 | orchestrator | 2026-04-17 01:30:32 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:35.169117 | orchestrator | 2026-04-17 01:30:35 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:37.517857 | orchestrator | 2026-04-17 01:30:37 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:39.877307 | orchestrator | 2026-04-17 01:30:39 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:42.166265 | orchestrator | 2026-04-17 01:30:42 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:44.469635 | orchestrator | 2026-04-17 01:30:44 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:46.779546 | orchestrator | 2026-04-17 01:30:46 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:49.194177 | orchestrator | 2026-04-17 01:30:49 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:51.509490 | orchestrator | 2026-04-17 01:30:51 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress
2026-04-17 01:30:53.756304 | orchestrator | 2026-04-17 01:30:53 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) completed with status ACTIVE
2026-04-17 01:30:54.034368 | orchestrator | + compute_list
2026-04-17 01:30:54.034440 | orchestrator | + osism manage compute list testbed-node-3
2026-04-17 01:30:56.793289 | orchestrator | +------+--------+----------+
2026-04-17 01:30:56.793363 | orchestrator | | ID | Name | Status |
2026-04-17 01:30:56.793371 | orchestrator | |------+--------+----------|
2026-04-17 01:30:56.793378 | orchestrator | +------+--------+----------+
2026-04-17 01:30:57.053246 | orchestrator | + osism manage compute list testbed-node-4
2026-04-17 01:31:00.280042 | orchestrator | +--------------------------------------+--------+----------+
2026-04-17 01:31:00.280142 | orchestrator | | ID | Name | Status |
2026-04-17 01:31:00.280153 | orchestrator | |--------------------------------------+--------+----------|
2026-04-17 01:31:00.280160 | orchestrator | | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | test-3 | ACTIVE |
2026-04-17 01:31:00.280167 | orchestrator | | 2199f480-6c2c-41b5-9879-3100faa44da5 | test-4 | ACTIVE |
2026-04-17 01:31:00.280175 | orchestrator | | 77961589-c28c-4657-9786-b39c35d648d0 | test-2 | ACTIVE |
2026-04-17 01:31:00.280181 | orchestrator | | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | test-1 | ACTIVE |
2026-04-17 01:31:00.280188 | orchestrator | | f238b33a-4185-4bb2-b90b-fa645cbc738c | test | ACTIVE |
2026-04-17 01:31:00.280195 | orchestrator | +--------------------------------------+--------+----------+
2026-04-17 01:31:00.600028 | orchestrator | + osism manage compute list testbed-node-5
2026-04-17 01:31:03.364061 | orchestrator | +------+--------+----------+
2026-04-17 01:31:03.364153 | orchestrator | | ID | Name | Status |
2026-04-17 01:31:03.364160 | orchestrator | |------+--------+----------|
2026-04-17 01:31:03.364165 | orchestrator | +------+--------+----------+
2026-04-17 01:31:03.634001 | orchestrator | + server_ping
2026-04-17 01:31:03.634701 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-17 01:31:03.634735 | orchestrator | ++ tr -d '\r'
2026-04-17 01:31:06.554242 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:31:06.554324 | orchestrator | + ping -c3 192.168.112.131
2026-04-17 01:31:06.564165 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
2026-04-17 01:31:06.564255 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=6.40 ms
2026-04-17 01:31:07.562230 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.22 ms
2026-04-17 01:31:08.564381 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.77 ms
2026-04-17 01:31:08.564511 | orchestrator |
2026-04-17 01:31:08.564540 | orchestrator | --- 192.168.112.131 ping statistics ---
2026-04-17 01:31:08.564559 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:31:08.564572 | orchestrator | rtt min/avg/max/mdev = 1.773/3.461/6.396/2.082 ms
2026-04-17 01:31:08.564584 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:31:08.564596 | orchestrator | + ping -c3 192.168.112.109
2026-04-17 01:31:08.574312 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-17 01:31:08.574417 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.09 ms
2026-04-17 01:31:09.572612 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.97 ms
2026-04-17 01:31:10.572861 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.58 ms
2026-04-17 01:31:10.573844 | orchestrator |
2026-04-17 01:31:10.573915 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-17 01:31:10.573931 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:31:10.573943 | orchestrator | rtt min/avg/max/mdev = 1.582/3.549/6.093/1.886 ms
2026-04-17 01:31:10.573955 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:31:10.573965 | orchestrator | + ping -c3 192.168.112.183
2026-04-17 01:31:10.584318 | orchestrator | PING 192.168.112.183 (192.168.112.183) 56(84) bytes of data.
2026-04-17 01:31:10.584388 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=1 ttl=63 time=7.70 ms
2026-04-17 01:31:11.581985 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=2 ttl=63 time=2.75 ms
2026-04-17 01:31:12.582793 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=3 ttl=63 time=1.77 ms
2026-04-17 01:31:12.582903 | orchestrator |
2026-04-17 01:31:12.582918 | orchestrator | --- 192.168.112.183 ping statistics ---
2026-04-17 01:31:12.582928 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-17 01:31:12.582936 | orchestrator | rtt min/avg/max/mdev = 1.768/4.070/7.696/2.594 ms
2026-04-17 01:31:12.583012 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:31:12.583024 | orchestrator | + ping -c3 192.168.112.100
2026-04-17 01:31:12.595821 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-04-17 01:31:12.595939 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=7.58 ms
2026-04-17 01:31:13.592935 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.63 ms
2026-04-17 01:31:14.594818 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.11 ms
2026-04-17 01:31:14.594901 | orchestrator |
2026-04-17 01:31:14.594912 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-04-17 01:31:14.594922 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-17 01:31:14.594930 | orchestrator | rtt min/avg/max/mdev = 2.106/4.105/7.575/2.463 ms
2026-04-17 01:31:14.594939 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:31:14.594947 | orchestrator | + ping -c3 192.168.112.187
2026-04-17 01:31:14.606290 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-04-17 01:31:14.606372 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=7.17 ms
2026-04-17 01:31:15.603605 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.98 ms
2026-04-17 01:31:16.603235 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.14 ms
2026-04-17 01:31:16.603312 | orchestrator |
2026-04-17 01:31:16.603321 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-04-17 01:31:16.603335 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 01:31:16.603340 | orchestrator | rtt min/avg/max/mdev = 2.137/4.095/7.169/2.200 ms
2026-04-17 01:31:16.603564 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-17 01:31:19.679344 | orchestrator | 2026-04-17 01:31:19 | INFO  | Live migrating server 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5
2026-04-17 01:31:29.409955 | orchestrator | 2026-04-17 01:31:29 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:31.755994 | orchestrator | 2026-04-17 01:31:31 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:34.101129 | orchestrator | 2026-04-17 01:31:34 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:36.463333 | orchestrator | 2026-04-17 01:31:36 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:38.842585 | orchestrator | 2026-04-17 01:31:38 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:41.141437 | orchestrator | 2026-04-17 01:31:41 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:43.486675 | orchestrator | 2026-04-17 01:31:43 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:45.852741 | orchestrator | 2026-04-17 01:31:45 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) is still in progress
2026-04-17 01:31:48.343929 | orchestrator | 2026-04-17 01:31:48 | INFO  | Live migration of 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 (test-3) completed with status ACTIVE
2026-04-17 01:31:48.344000 | orchestrator | 2026-04-17 01:31:48 | INFO  | Live migrating server 2199f480-6c2c-41b5-9879-3100faa44da5
2026-04-17 01:31:58.001856 | orchestrator | 2026-04-17 01:31:58 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:00.389358 | orchestrator | 2026-04-17 01:32:00 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:02.877202 | orchestrator | 2026-04-17 01:32:02 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:05.158513 | orchestrator | 2026-04-17 01:32:05 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:07.583569 | orchestrator | 2026-04-17 01:32:07 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:09.846180 | orchestrator | 2026-04-17 01:32:09 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:12.266823 | orchestrator | 2026-04-17 01:32:12 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:14.568739 | orchestrator | 2026-04-17 01:32:14 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) is still in progress
2026-04-17 01:32:16.883352 | orchestrator | 2026-04-17 01:32:16 | INFO  | Live migration of 2199f480-6c2c-41b5-9879-3100faa44da5 (test-4) completed with status ACTIVE
2026-04-17 01:32:16.883443 | orchestrator | 2026-04-17 01:32:16 | INFO  | Live migrating server 77961589-c28c-4657-9786-b39c35d648d0
2026-04-17 01:32:28.724004 | orchestrator | 2026-04-17 01:32:28 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:31.036483 | orchestrator | 2026-04-17 01:32:31 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:33.407361 | orchestrator | 2026-04-17 01:32:33 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:35.850646 | orchestrator | 2026-04-17 01:32:35 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:38.266353 | orchestrator | 2026-04-17 01:32:38 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:40.569431 | orchestrator | 2026-04-17 01:32:40 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:42.916963 | orchestrator | 2026-04-17 01:32:42 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:45.270428 | orchestrator | 2026-04-17 01:32:45 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) is still in progress
2026-04-17 01:32:47.533716 | orchestrator | 2026-04-17 01:32:47 | INFO  | Live migration of 77961589-c28c-4657-9786-b39c35d648d0 (test-2) completed with status ACTIVE
2026-04-17 01:32:47.533801 | orchestrator | 2026-04-17 01:32:47 | INFO  | Live migrating server d0ae4266-ef8b-410a-ac7e-6835e94eeda1
2026-04-17 01:32:57.649573 | orchestrator | 2026-04-17 01:32:57 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:32:59.977862 | orchestrator | 2026-04-17 01:32:59 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:02.330820 | orchestrator | 2026-04-17 01:33:02 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:04.624749 | orchestrator | 2026-04-17 01:33:04 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:06.900242 | orchestrator | 2026-04-17 01:33:06 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:09.296531 | orchestrator | 2026-04-17 01:33:09 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:11.567775 | orchestrator | 2026-04-17 01:33:11 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:13.802443 | orchestrator | 2026-04-17 01:33:13 | INFO  | Live migration of d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) is still in progress
2026-04-17 01:33:16.092647 | orchestrator | 2026-04-17 01:33:16 | INFO  | Live migration of 
d0ae4266-ef8b-410a-ac7e-6835e94eeda1 (test-1) completed with status ACTIVE 2026-04-17 01:33:16.092723 | orchestrator | 2026-04-17 01:33:16 | INFO  | Live migrating server f238b33a-4185-4bb2-b90b-fa645cbc738c 2026-04-17 01:33:26.296447 | orchestrator | 2026-04-17 01:33:26 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:28.613334 | orchestrator | 2026-04-17 01:33:28 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:30.961282 | orchestrator | 2026-04-17 01:33:30 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:33.299162 | orchestrator | 2026-04-17 01:33:33 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:35.634602 | orchestrator | 2026-04-17 01:33:35 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:37.920014 | orchestrator | 2026-04-17 01:33:37 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:40.216090 | orchestrator | 2026-04-17 01:33:40 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:42.515340 | orchestrator | 2026-04-17 01:33:42 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:44.843461 | orchestrator | 2026-04-17 01:33:44 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:47.140267 | orchestrator | 2026-04-17 01:33:47 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) is still in progress 2026-04-17 01:33:49.443176 | orchestrator | 2026-04-17 01:33:49 | INFO  | Live migration of f238b33a-4185-4bb2-b90b-fa645cbc738c (test) completed with status ACTIVE 2026-04-17 01:33:49.741126 | orchestrator | + 
compute_list
2026-04-17 01:33:49.741211 | orchestrator | + osism manage compute list testbed-node-3
2026-04-17 01:33:52.375740 | orchestrator | +------+--------+----------+
2026-04-17 01:33:52.375819 | orchestrator | | ID | Name | Status |
2026-04-17 01:33:52.375825 | orchestrator | |------+--------+----------|
2026-04-17 01:33:52.375830 | orchestrator | +------+--------+----------+
2026-04-17 01:33:52.657687 | orchestrator | + osism manage compute list testbed-node-4
2026-04-17 01:33:55.440934 | orchestrator | +------+--------+----------+
2026-04-17 01:33:55.441036 | orchestrator | | ID | Name | Status |
2026-04-17 01:33:55.441044 | orchestrator | |------+--------+----------|
2026-04-17 01:33:55.441049 | orchestrator | +------+--------+----------+
2026-04-17 01:33:55.718743 | orchestrator | + osism manage compute list testbed-node-5
2026-04-17 01:33:58.762546 | orchestrator | +--------------------------------------+--------+----------+
2026-04-17 01:33:58.762642 | orchestrator | | ID | Name | Status |
2026-04-17 01:33:58.762655 | orchestrator | |--------------------------------------+--------+----------|
2026-04-17 01:33:58.762661 | orchestrator | | 194f4ed7-c4bc-4c67-8ca3-2c341ca1a0c5 | test-3 | ACTIVE |
2026-04-17 01:33:58.762668 | orchestrator | | 2199f480-6c2c-41b5-9879-3100faa44da5 | test-4 | ACTIVE |
2026-04-17 01:33:58.762675 | orchestrator | | 77961589-c28c-4657-9786-b39c35d648d0 | test-2 | ACTIVE |
2026-04-17 01:33:58.762682 | orchestrator | | d0ae4266-ef8b-410a-ac7e-6835e94eeda1 | test-1 | ACTIVE |
2026-04-17 01:33:58.762688 | orchestrator | | f238b33a-4185-4bb2-b90b-fa645cbc738c | test | ACTIVE |
2026-04-17 01:33:58.762695 | orchestrator | +--------------------------------------+--------+----------+
2026-04-17 01:33:59.045096 | orchestrator | + server_ping
2026-04-17 01:33:59.046226 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-17 01:33:59.046366 | orchestrator | ++ tr -d '\r'
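The `server_ping` helper invoked above lists every ACTIVE floating IP and pings each three times, as the trace that follows shows. A self-contained sketch of that control flow (not the testbed's actual script): the `openstack` client and `ping` are stubbed here with the addresses seen in this run so the loop can be executed anywhere without cloud credentials or real ICMP.

```shell
# Stub: return the floating IPs observed in this job (illustrative only).
openstack() {
    printf '192.168.112.131\n192.168.112.109\n192.168.112.183\n'
}
# Stub: print the ping invocation instead of sending real ICMP.
ping() { echo "would run: ping $*"; }

server_ping() {
    # Unquoted expansion is intentional: one loop iteration per address.
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
                        -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}

server_ping
```

With the stubs removed, the loop body is the same `ping -c3 "$address"` per address that produced the statistics blocks in the log below.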
2026-04-17 01:34:01.744773 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:34:01.744865 | orchestrator | + ping -c3 192.168.112.131
2026-04-17 01:34:01.755204 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
2026-04-17 01:34:01.755358 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=8.22 ms
2026-04-17 01:34:02.750587 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.03 ms
2026-04-17 01:34:03.751166 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.28 ms
2026-04-17 01:34:03.751266 | orchestrator |
2026-04-17 01:34:03.751279 | orchestrator | --- 192.168.112.131 ping statistics ---
2026-04-17 01:34:03.751328 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 01:34:03.751337 | orchestrator | rtt min/avg/max/mdev = 1.280/3.844/8.220/3.109 ms
2026-04-17 01:34:03.751585 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:34:03.751603 | orchestrator | + ping -c3 192.168.112.109
2026-04-17 01:34:03.762007 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2026-04-17 01:34:03.762161 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=5.56 ms
2026-04-17 01:34:04.759374 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.08 ms
2026-04-17 01:34:05.761359 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.95 ms
2026-04-17 01:34:05.761475 | orchestrator |
2026-04-17 01:34:05.761492 | orchestrator | --- 192.168.112.109 ping statistics ---
2026-04-17 01:34:05.761504 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 01:34:05.761514 | orchestrator | rtt min/avg/max/mdev = 1.952/3.195/5.556/1.670 ms
2026-04-17 01:34:05.761787 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:34:05.761815 | orchestrator | + ping -c3 192.168.112.183
2026-04-17 01:34:05.775629 | orchestrator | PING 192.168.112.183 (192.168.112.183) 56(84) bytes of data.
2026-04-17 01:34:05.775706 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=1 ttl=63 time=9.63 ms
2026-04-17 01:34:06.769740 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=2 ttl=63 time=2.03 ms
2026-04-17 01:34:07.771171 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=3 ttl=63 time=1.88 ms
2026-04-17 01:34:07.771264 | orchestrator |
2026-04-17 01:34:07.771276 | orchestrator | --- 192.168.112.183 ping statistics ---
2026-04-17 01:34:07.771285 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:34:07.771292 | orchestrator | rtt min/avg/max/mdev = 1.879/4.513/9.628/3.617 ms
2026-04-17 01:34:07.771731 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:34:07.771748 | orchestrator | + ping -c3 192.168.112.100
2026-04-17 01:34:07.783988 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-04-17 01:34:07.784098 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=8.19 ms
2026-04-17 01:34:08.779601 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.35 ms
2026-04-17 01:34:09.781211 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.78 ms
2026-04-17 01:34:09.781325 | orchestrator |
2026-04-17 01:34:09.781338 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-04-17 01:34:09.781345 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-17 01:34:09.781351 | orchestrator | rtt min/avg/max/mdev = 1.781/4.108/8.190/2.895 ms
2026-04-17 01:34:09.781358 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-17 01:34:09.781365 | orchestrator | + ping -c3 192.168.112.187
2026-04-17 01:34:09.794669 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data.
2026-04-17 01:34:09.794753 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=8.90 ms
2026-04-17 01:34:10.788820 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.55 ms
2026-04-17 01:34:11.790178 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.93 ms
2026-04-17 01:34:11.790914 | orchestrator |
2026-04-17 01:34:11.790959 | orchestrator | --- 192.168.112.187 ping statistics ---
2026-04-17 01:34:11.790970 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-17 01:34:11.790977 | orchestrator | rtt min/avg/max/mdev = 1.928/4.460/8.904/3.152 ms
2026-04-17 01:34:12.011895 | orchestrator | ok: Runtime: 0:18:30.233858
2026-04-17 01:34:12.067781 |
2026-04-17 01:34:12.067944 | TASK [Run tempest]
2026-04-17 01:34:12.876442 | orchestrator |
2026-04-17 01:34:12.876761 | orchestrator | # Tempest
2026-04-17 01:34:12.876778 | orchestrator |
2026-04-17 01:34:12.876784 | orchestrator | + set -e
2026-04-17 01:34:12.876788 | orchestrator | + set -o pipefail
2026-04-17 01:34:12.876797 | orchestrator | + source /opt/manager-vars.sh
2026-04-17 01:34:12.876808 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-17 01:34:12.876834 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-17 01:34:12.876844 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-17 01:34:12.876850 | orchestrator | ++ CEPH_VERSION=reef
2026-04-17 01:34:12.876856 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-17 01:34:12.876862 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-17 01:34:12.876868 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-17 01:34:12.876876 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-17 01:34:12.876880 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-17 01:34:12.876888 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-17 01:34:12.876891 | orchestrator | ++ export ARA=false
2026-04-17 01:34:12.876895 | orchestrator | ++ ARA=false
2026-04-17 01:34:12.876904 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-17 01:34:12.876908 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-17 01:34:12.876912 | orchestrator | ++ export TEMPEST=true
2026-04-17 01:34:12.876920 | orchestrator | ++ TEMPEST=true
2026-04-17 01:34:12.876924 | orchestrator | ++ export IS_ZUUL=true
2026-04-17 01:34:12.876927 | orchestrator | ++ IS_ZUUL=true
2026-04-17 01:34:12.876932 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2026-04-17 01:34:12.876937 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.203
2026-04-17 01:34:12.876941 | orchestrator | ++ export EXTERNAL_API=false
2026-04-17 01:34:12.876945 | orchestrator | ++ EXTERNAL_API=false
2026-04-17 01:34:12.876949 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-17 01:34:12.876952 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-17 01:34:12.876956 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-17 01:34:12.876960 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-17 01:34:12.876964 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-17 01:34:12.876968 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-17 01:34:12.876971 | orchestrator | + echo
2026-04-17 01:34:12.876975 | orchestrator | + echo '# Tempest'
2026-04-17 01:34:12.876979 | orchestrator | + echo
2026-04-17 01:34:12.876983 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-17 01:34:12.876987 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-17 01:34:24.095391 | orchestrator | 2026-04-17 01:34:24 | INFO  | Prepare task for execution of tempest.
2026-04-17 01:34:24.165877 | orchestrator | 2026-04-17 01:34:24 | INFO  | Task dc2a44c4-5b0b-49a9-8bed-973d5ef98baf (tempest) was prepared for execution.
2026-04-17 01:34:24.165996 | orchestrator | 2026-04-17 01:34:24 | INFO  | It takes a moment until task dc2a44c4-5b0b-49a9-8bed-973d5ef98baf (tempest) has been started and output is visible here.
2026-04-17 01:35:40.125738 | orchestrator |
2026-04-17 01:35:40.125838 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-17 01:35:40.125849 | orchestrator |
2026-04-17 01:35:40.125856 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-17 01:35:40.125874 | orchestrator | Friday 17 April 2026 01:34:27 +0000 (0:00:00.324) 0:00:00.324 **********
2026-04-17 01:35:40.125881 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.125889 | orchestrator |
2026-04-17 01:35:40.125896 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-17 01:35:40.125903 | orchestrator | Friday 17 April 2026 01:34:28 +0000 (0:00:00.999) 0:00:01.323 **********
2026-04-17 01:35:40.125911 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.125917 | orchestrator |
2026-04-17 01:35:40.125923 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-17 01:35:40.125929 | orchestrator | Friday 17 April 2026 01:34:29 +0000 (0:00:01.172) 0:00:02.496 **********
2026-04-17 01:35:40.125936 | orchestrator | ok: [testbed-manager]
2026-04-17 01:35:40.125943 | orchestrator |
2026-04-17 01:35:40.125950 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-17 01:35:40.125956 | orchestrator | Friday 17 April 2026 01:34:30 +0000 (0:00:00.421) 0:00:02.917 **********
2026-04-17 01:35:40.125963 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.125969 | orchestrator |
2026-04-17 01:35:40.125975 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-17 01:35:40.126009 | orchestrator | Friday 17 April 2026 01:34:51 +0000 (0:00:21.082) 0:00:24.000 **********
2026-04-17 01:35:40.126051 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-17 01:35:40.126061 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-17 01:35:40.126068 | orchestrator |
2026-04-17 01:35:40.126075 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-17 01:35:40.126082 | orchestrator | Friday 17 April 2026 01:35:00 +0000 (0:00:09.434) 0:00:33.434 **********
2026-04-17 01:35:40.126088 | orchestrator | ok: [testbed-manager] => {
2026-04-17 01:35:40.126095 | orchestrator |  "changed": false,
2026-04-17 01:35:40.126102 | orchestrator |  "msg": "All assertions passed"
2026-04-17 01:35:40.126108 | orchestrator | }
2026-04-17 01:35:40.126116 | orchestrator |
2026-04-17 01:35:40.126122 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-17 01:35:40.126129 | orchestrator | Friday 17 April 2026 01:35:00 +0000 (0:00:00.153) 0:00:33.588 **********
2026-04-17 01:35:40.126135 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126142 | orchestrator |
2026-04-17 01:35:40.126149 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-17 01:35:40.126156 | orchestrator | Friday 17 April 2026 01:35:04 +0000 (0:00:03.628) 0:00:37.217 **********
2026-04-17 01:35:40.126163 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126169 | orchestrator |
2026-04-17 01:35:40.126176 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-17 01:35:40.126182 | orchestrator | Friday 17 April 2026 01:35:06 +0000 (0:00:01.755) 0:00:38.972 **********
2026-04-17 01:35:40.126189 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126196 | orchestrator |
2026-04-17 01:35:40.126202 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-17 01:35:40.126207 | orchestrator | Friday 17 April 2026 01:35:09 +0000 (0:00:03.453) 0:00:42.426 **********
2026-04-17 01:35:40.126213 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126220 | orchestrator |
2026-04-17 01:35:40.126225 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-17 01:35:40.126231 | orchestrator | Friday 17 April 2026 01:35:09 +0000 (0:00:00.184) 0:00:42.610 **********
2026-04-17 01:35:40.126239 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.126246 | orchestrator |
2026-04-17 01:35:40.126253 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-17 01:35:40.126263 | orchestrator | Friday 17 April 2026 01:35:12 +0000 (0:00:02.398) 0:00:45.009 **********
2026-04-17 01:35:40.126273 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.126278 | orchestrator |
2026-04-17 01:35:40.126284 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-17 01:35:40.126290 | orchestrator | Friday 17 April 2026 01:35:20 +0000 (0:00:08.486) 0:00:53.495 **********
2026-04-17 01:35:40.126295 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.126301 | orchestrator |
2026-04-17 01:35:40.126308 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-17 01:35:40.126314 | orchestrator | Friday 17 April 2026 01:35:21 +0000 (0:00:00.684) 0:00:54.180 **********
2026-04-17 01:35:40.126320 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126326 | orchestrator |
2026-04-17 01:35:40.126332 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-17 01:35:40.126338 | orchestrator | Friday 17 April 2026 01:35:22 +0000 (0:00:01.507) 0:00:55.687 **********
2026-04-17 01:35:40.126344 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126350 | orchestrator |
2026-04-17 01:35:40.126356 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-17 01:35:40.126362 | orchestrator | Friday 17 April 2026 01:35:24 +0000 (0:00:01.537) 0:00:57.225 **********
2026-04-17 01:35:40.126368 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126374 | orchestrator |
2026-04-17 01:35:40.126380 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-17 01:35:40.126394 | orchestrator | Friday 17 April 2026 01:35:24 +0000 (0:00:00.177) 0:00:57.402 **********
2026-04-17 01:35:40.126401 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126407 | orchestrator |
2026-04-17 01:35:40.126421 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-17 01:35:40.126427 | orchestrator | Friday 17 April 2026 01:35:24 +0000 (0:00:00.352) 0:00:57.755 **********
2026-04-17 01:35:40.126433 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-17 01:35:40.126439 | orchestrator |
2026-04-17 01:35:40.126445 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-17 01:35:40.126472 | orchestrator | Friday 17 April 2026 01:35:28 +0000 (0:00:03.900) 0:01:01.655 **********
2026-04-17 01:35:40.126478 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-17 01:35:40.126482 | orchestrator |  "changed": false,
2026-04-17 01:35:40.126486 | orchestrator |  "msg": "All assertions passed"
2026-04-17 01:35:40.126490 | orchestrator | }
2026-04-17 01:35:40.126494 | orchestrator |
2026-04-17 01:35:40.126498 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-17 01:35:40.126502 | orchestrator | Friday 17 April 2026 01:35:28 +0000 (0:00:00.189) 0:01:01.844 **********
2026-04-17 01:35:40.126506 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-17 01:35:40.126511 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-17 01:35:40.126515 | orchestrator | skipping: [testbed-manager]
2026-04-17 01:35:40.126518 | orchestrator |
2026-04-17 01:35:40.126595 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-04-17 01:35:40.126600 | orchestrator | Friday 17 April 2026 01:35:29 +0000 (0:00:00.191) 0:01:02.036 **********
2026-04-17 01:35:40.126604 | orchestrator | skipping: [testbed-manager]
2026-04-17 01:35:40.126608 | orchestrator |
2026-04-17 01:35:40.126612 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-04-17 01:35:40.126615 | orchestrator | Friday 17 April 2026 01:35:29 +0000 (0:00:00.147) 0:01:02.184 **********
2026-04-17 01:35:40.126619 | orchestrator | ok: [testbed-manager]
2026-04-17 01:35:40.126623 | orchestrator |
2026-04-17 01:35:40.126627 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-04-17 01:35:40.126631 | orchestrator | Friday 17 April 2026 01:35:29 +0000 (0:00:00.464) 0:01:02.648 **********
2026-04-17 01:35:40.126635 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.126638 | orchestrator |
2026-04-17 01:35:40.126642 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-04-17 01:35:40.126646 | orchestrator | Friday 17 April 2026 01:35:30 +0000 (0:00:00.874) 0:01:03.522 **********
2026-04-17 01:35:40.126650 | orchestrator | ok: [testbed-manager]
2026-04-17 01:35:40.126654 | orchestrator |
2026-04-17 01:35:40.126657 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-04-17 01:35:40.126661 | orchestrator | Friday 17 April 2026 01:35:31 +0000 (0:00:00.410) 0:01:03.933 **********
2026-04-17 01:35:40.126680 | orchestrator | skipping: [testbed-manager]
2026-04-17 01:35:40.126684 | orchestrator |
2026-04-17 01:35:40.126688 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-04-17 01:35:40.126692 | orchestrator | Friday 17 April 2026 01:35:31 +0000 (0:00:00.308) 0:01:04.241 **********
2026-04-17 01:35:40.126696 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-17 01:35:40.126700 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-17 01:35:40.126704 | orchestrator |
2026-04-17 01:35:40.126708 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-04-17 01:35:40.126712 | orchestrator | Friday 17 April 2026 01:35:39 +0000 (0:00:07.737) 0:01:11.979 **********
2026-04-17 01:35:40.126718 | orchestrator | changed: [testbed-manager]
2026-04-17 01:35:40.126731 | orchestrator |
2026-04-17 01:35:40.126741 | orchestrator | PLAY RECAP *********************************************************************
2026-04-17 01:35:40.126750 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-17 01:35:40.126757 | orchestrator |
2026-04-17 01:35:40.126763 | orchestrator |
2026-04-17 01:35:40.126770 | orchestrator | TASKS RECAP ********************************************************************
2026-04-17 01:35:40.126776 | orchestrator | Friday 17 April 2026 01:35:40 +0000 (0:00:01.019) 0:01:12.998 **********
2026-04-17 01:35:40.126781 | orchestrator | ===============================================================================
2026-04-17 01:35:40.126788 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.08s
2026-04-17 01:35:40.126794 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 9.43s
2026-04-17 01:35:40.126801 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.49s
2026-04-17 01:35:40.126807 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.74s
2026-04-17 01:35:40.126820 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.90s
2026-04-17 01:35:40.126826 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.63s
2026-04-17 01:35:40.126830 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.45s
2026-04-17 01:35:40.126834 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.40s
2026-04-17 01:35:40.126838 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.76s
2026-04-17 01:35:40.126842 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.54s
2026-04-17 01:35:40.126846 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.51s
2026-04-17 01:35:40.126849 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.17s
2026-04-17 01:35:40.126853 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.02s
2026-04-17 01:35:40.126857 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.00s
2026-04-17 01:35:40.126860 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.87s
2026-04-17 01:35:40.126864 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.68s
2026-04-17 01:35:40.126868 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.46s
2026-04-17 01:35:40.126878 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.42s
2026-04-17 01:35:40.365132 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.41s
2026-04-17 01:35:40.365227 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.35s
2026-04-17 01:35:40.558936 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-17 01:35:40.561890 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-17 01:35:40.564234 | orchestrator |
2026-04-17 01:35:40.564297 | orchestrator | ## IDENTITY (API)
2026-04-17 01:35:40.564303 | orchestrator |
2026-04-17 01:35:40.564308 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-17 01:35:40.564313 | orchestrator | + echo
2026-04-17 01:35:40.564317 | orchestrator | + echo '## IDENTITY (API)'
2026-04-17 01:35:40.564321 | orchestrator | + echo
2026-04-17 01:35:40.564325 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-17 01:35:40.564330 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-17 01:35:40.565037 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-17 01:35:40.566634 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-17 01:35:40.569812 | orchestrator | + tee -a /opt/tempest/20260417-0135.log
2026-04-17 01:35:42.750990 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-17 01:35:42.751100 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-17 01:35:42.751114 | orchestrator | we strongly recommend against using it for new projects.
2026-04-17 01:35:42.751130 | orchestrator |
2026-04-17 01:35:42.751141 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-17 01:35:42.751149 | orchestrator | framework. For more detail see
2026-04-17 01:35:42.751159 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-17 01:35:42.751168 | orchestrator |
2026-04-17 01:35:42.751177 | orchestrator | __import__(import_str)
2026-04-17 01:35:44.273973 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-17 01:35:44.274099 | orchestrator | Did you mean one of these?
2026-04-17 01:35:44.274112 | orchestrator | help
2026-04-17 01:35:44.274120 | orchestrator | init
2026-04-17 01:35:44.671670 | orchestrator | ERROR
2026-04-17 01:35:44.672011 | orchestrator | {
2026-04-17 01:35:44.672072 | orchestrator | "delta": "0:01:32.190485",
2026-04-17 01:35:44.672111 | orchestrator | "end": "2026-04-17 01:35:44.617979",
2026-04-17 01:35:44.672145 | orchestrator | "msg": "non-zero return code",
2026-04-17 01:35:44.672177 | orchestrator | "rc": 2,
2026-04-17 01:35:44.672207 | orchestrator | "start": "2026-04-17 01:34:12.427494"
2026-04-17 01:35:44.672236 | orchestrator | } failure
2026-04-17 01:35:44.680008 |
2026-04-17 01:35:44.680098 | PLAY RECAP
2026-04-17 01:35:44.680160 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-04-17 01:35:44.680192 |
2026-04-17 01:35:44.957184 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-17 01:35:44.958577 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-17 01:35:45.712257 |
2026-04-17 01:35:45.712471 | PLAY [Post output play]
2026-04-17 01:35:45.728459 |
2026-04-17 01:35:45.728590 | LOOP [stage-output : Register sources]
2026-04-17 01:35:45.807184 |
2026-04-17 01:35:45.807695 | TASK [stage-output : Check sudo]
2026-04-17 01:35:46.661429 | orchestrator | sudo: a password is required
2026-04-17 01:35:46.853061 | orchestrator | ok: Runtime: 0:00:00.011790
2026-04-17 01:35:46.868284 |
2026-04-17 01:35:46.868468 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-17 01:35:46.908756 |
2026-04-17 01:35:46.909054 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-17 01:35:46.986255 | orchestrator | ok
2026-04-17 01:35:46.995338 |
2026-04-17 01:35:46.995532 | LOOP [stage-output : Ensure target folders exist]
2026-04-17 01:35:47.487216 | orchestrator | ok: "docs"
2026-04-17 01:35:47.487581 |
2026-04-17 01:35:47.781892 | orchestrator | ok: "artifacts"
2026-04-17 01:35:48.073526 | orchestrator | ok: "logs"
2026-04-17 01:35:48.087770 |
2026-04-17 01:35:48.087911 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-17 01:35:48.127337 |
2026-04-17 01:35:48.127679 | TASK [stage-output : Make all log files readable]
2026-04-17 01:35:48.470719 | orchestrator | ok
2026-04-17 01:35:48.479609 |
2026-04-17 01:35:48.479738 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-17 01:35:48.514562 | orchestrator | skipping: Conditional result was False
2026-04-17 01:35:48.530732 |
2026-04-17 01:35:48.530908 | TASK [stage-output : Discover log files for compression]
2026-04-17 01:35:48.555459 | orchestrator | skipping: Conditional result was False
2026-04-17 01:35:48.572365 |
2026-04-17 01:35:48.572546 | LOOP [stage-output : Archive everything from logs]
2026-04-17 01:35:48.617207 |
2026-04-17 01:35:48.617379 | PLAY [Post cleanup play]
2026-04-17 01:35:48.626079 |
2026-04-17 01:35:48.626187 | TASK [Set cloud fact (Zuul deployment)]
2026-04-17 01:35:48.683501 | orchestrator | ok
2026-04-17 01:35:48.694990 |
2026-04-17 01:35:48.695103 | TASK [Set cloud fact (local deployment)]
2026-04-17 01:35:48.729170 | orchestrator | skipping: Conditional result was False
2026-04-17 01:35:48.742775 |
2026-04-17 01:35:48.742940 | TASK [Clean the cloud environment]
2026-04-17 01:35:49.463427 | orchestrator | 2026-04-17 01:35:49 - clean up servers
2026-04-17 01:35:50.239316 | orchestrator | 2026-04-17 01:35:50 - testbed-manager
2026-04-17 01:35:50.323879 | orchestrator | 2026-04-17 01:35:50 - testbed-node-4
2026-04-17 01:35:50.417037 | orchestrator | 2026-04-17 01:35:50 - testbed-node-3
2026-04-17 01:35:50.503826 | orchestrator | 2026-04-17 01:35:50 - testbed-node-2
2026-04-17 01:35:50.589698 | orchestrator | 2026-04-17 01:35:50 - testbed-node-0
2026-04-17 01:35:50.684414 | orchestrator | 2026-04-17 01:35:50 - testbed-node-1
2026-04-17 01:35:50.777082 | orchestrator | 2026-04-17 01:35:50 - testbed-node-5
2026-04-17 01:35:50.872817 | orchestrator | 2026-04-17 01:35:50 - clean up keypairs
2026-04-17 01:35:50.893038 | orchestrator | 2026-04-17 01:35:50 - testbed
2026-04-17 01:35:50.919476 | orchestrator | 2026-04-17 01:35:50 - wait for servers to be gone
2026-04-17 01:36:04.026826 | orchestrator | 2026-04-17 01:36:04 - clean up ports
2026-04-17 01:36:04.206611 | orchestrator | 2026-04-17 01:36:04 - 141ff78f-5dd6-4a7c-9494-dc4f0b5c9ad3
2026-04-17 01:36:04.489176 | orchestrator | 2026-04-17 01:36:04 - 24f90b5f-9df6-4285-bd15-acbe821e7402
2026-04-17 01:36:04.779998 | orchestrator | 2026-04-17 01:36:04 - 32cafdb3-4be2-4cbb-8c42-a3cdbf2f47cd
2026-04-17 01:36:04.979671 | orchestrator | 2026-04-17 01:36:04 - 42c2da1b-03c7-4d49-a8e7-6aae520e6b88
2026-04-17 01:36:05.186006 | orchestrator | 2026-04-17 01:36:05 - 5e1a1693-62ef-432e-9fe7-f610cf0ff917
2026-04-17 01:36:05.393899 | orchestrator | 2026-04-17 01:36:05 - 6f1cf3a5-7394-4c2c-809d-7a568e051c83
2026-04-17 01:36:05.593182 | orchestrator | 2026-04-17 01:36:05 - f515c78c-c86f-4160-a176-397fb6511f32
2026-04-17 01:36:05.990141 | orchestrator | 2026-04-17 01:36:05 - clean up volumes
2026-04-17 01:36:06.128241 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-3-node-base
2026-04-17 01:36:06.168557 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-0-node-base
2026-04-17 01:36:06.210533 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-1-node-base
2026-04-17 01:36:06.254760 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-4-node-base
2026-04-17 01:36:06.296125 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-5-node-base
2026-04-17 01:36:06.340657 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-2-node-base
2026-04-17 01:36:06.404651 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-manager-base
2026-04-17 01:36:06.446793 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-6-node-3
2026-04-17 01:36:06.493752 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-4-node-4
2026-04-17 01:36:06.534574 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-7-node-4
2026-04-17 01:36:06.577234 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-0-node-3
2026-04-17 01:36:06.622292 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-2-node-5
2026-04-17 01:36:06.660807 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-1-node-4
2026-04-17 01:36:06.699914 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-5-node-5
2026-04-17 01:36:06.743415 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-3-node-3
2026-04-17 01:36:06.783000 | orchestrator | 2026-04-17 01:36:06 - testbed-volume-8-node-5
2026-04-17 01:36:06.824326 | orchestrator | 2026-04-17 01:36:06 - disconnect routers
2026-04-17 01:36:06.970252 | orchestrator | 2026-04-17 01:36:06 - testbed
2026-04-17 01:36:07.907725 | orchestrator | 2026-04-17 01:36:07 - clean up subnets
2026-04-17 01:36:07.964687 | orchestrator | 2026-04-17 01:36:07 - subnet-testbed-management
2026-04-17 01:36:08.151983 | orchestrator | 2026-04-17 01:36:08 - clean up networks
2026-04-17 01:36:08.324352 | orchestrator | 2026-04-17 01:36:08 - net-testbed-management
2026-04-17 01:36:08.644897 | orchestrator | 2026-04-17 01:36:08 - clean up security groups
2026-04-17 01:36:08.683898 | orchestrator | 2026-04-17 01:36:08 - testbed-management
2026-04-17 01:36:08.793854 | orchestrator | 2026-04-17 01:36:08 - testbed-node
2026-04-17 01:36:08.930211 | orchestrator | 2026-04-17 01:36:08 - clean up floating ips
2026-04-17 01:36:08.962254 | orchestrator | 2026-04-17 01:36:08 - 81.163.192.203
2026-04-17 01:36:09.346434 | orchestrator | 2026-04-17 01:36:09 - clean up routers
2026-04-17 01:36:09.454280 | orchestrator | 2026-04-17 01:36:09 - testbed
2026-04-17 01:36:10.807245 | orchestrator | ok: Runtime: 0:00:21.287973
2026-04-17 01:36:10.812117 |
2026-04-17 01:36:10.812287 | PLAY RECAP
2026-04-17 01:36:10.812415 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-17 01:36:10.812513 |
2026-04-17 01:36:10.950875 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-17 01:36:10.953459 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-17 01:36:11.715816 |
2026-04-17 01:36:11.715984 | PLAY [Cleanup play]
2026-04-17 01:36:11.732093 |
2026-04-17 01:36:11.732231 | TASK [Set cloud fact (Zuul deployment)]
2026-04-17 01:36:11.788339 | orchestrator | ok
2026-04-17 01:36:11.798077 |
2026-04-17 01:36:11.798235 | TASK [Set cloud fact (local deployment)]
2026-04-17 01:36:11.832750 | orchestrator | skipping: Conditional result was False
2026-04-17 01:36:11.849036 |
2026-04-17 01:36:11.849190 | TASK [Clean the cloud environment]
2026-04-17 01:36:13.052713 | orchestrator | 2026-04-17 01:36:13 - clean up servers
2026-04-17 01:36:13.573668 | orchestrator | 2026-04-17 01:36:13 - clean up keypairs
2026-04-17 01:36:13.595165 | orchestrator | 2026-04-17 01:36:13 - wait for servers to be gone
2026-04-17 01:36:13.634869 | orchestrator | 2026-04-17 01:36:13 - clean up ports
2026-04-17 01:36:13.726846 | orchestrator | 2026-04-17 01:36:13 - clean up volumes
2026-04-17 01:36:13.799216 | orchestrator | 2026-04-17 01:36:13 - disconnect routers
2026-04-17 01:36:13.821750 | orchestrator | 2026-04-17 01:36:13 - clean up subnets
2026-04-17 01:36:13.845778 | orchestrator | 2026-04-17 01:36:13 - clean up networks
2026-04-17 01:36:13.983341 | orchestrator | 2026-04-17 01:36:13 - clean up security groups
2026-04-17 01:36:14.019268 | orchestrator | 2026-04-17 01:36:14 - clean up floating ips
2026-04-17 01:36:14.046922 | orchestrator | 2026-04-17 01:36:14 - clean up routers
2026-04-17 01:36:14.394249 | orchestrator | ok: Runtime: 0:00:01.433571
2026-04-17 01:36:14.396868 |
2026-04-17 01:36:14.396975 | PLAY RECAP
2026-04-17 01:36:14.397048 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-17 01:36:14.397087 |
2026-04-17 01:36:14.550619 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-17 01:36:14.551718 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-17 01:36:15.303118 |
2026-04-17 01:36:15.303282 | PLAY [Base post-fetch]
2026-04-17 01:36:15.319093 |
2026-04-17 01:36:15.319237 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-17 01:36:15.384550 | orchestrator | skipping: Conditional result was False
2026-04-17 01:36:15.396529 |
2026-04-17 01:36:15.396708 | TASK [fetch-output : Set log path for single node]
2026-04-17 01:36:15.442621 | orchestrator | ok
2026-04-17 01:36:15.450696 |
2026-04-17 01:36:15.450860 | LOOP [fetch-output : Ensure local output dirs]
2026-04-17 01:36:15.953068 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/work/logs"
2026-04-17 01:36:16.230641 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/work/artifacts"
2026-04-17 01:36:16.507826 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cfb67c0e06f4403695f75a6ddf5ac11e/work/docs"
2026-04-17 01:36:16.529528 |
2026-04-17 01:36:16.529671 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-17 01:36:17.446702 | orchestrator | changed: .d..t...... ./
2026-04-17 01:36:17.447079 | orchestrator | changed: All items complete
2026-04-17 01:36:17.447129 |
2026-04-17 01:36:18.179534 | orchestrator | changed: .d..t...... ./
2026-04-17 01:36:18.925201 | orchestrator | changed: .d..t...... ./
2026-04-17 01:36:18.944833 |
2026-04-17 01:36:18.944983 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-17 01:36:18.982786 | orchestrator | skipping: Conditional result was False
2026-04-17 01:36:18.985606 | orchestrator | skipping: Conditional result was False
2026-04-17 01:36:19.011877 |
2026-04-17 01:36:19.012025 | PLAY RECAP
2026-04-17 01:36:19.012104 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-17 01:36:19.012142 |
2026-04-17 01:36:19.146776 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-17 01:36:19.149346 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-17 01:36:19.905014 |
2026-04-17 01:36:19.905175 | PLAY [Base post]
2026-04-17 01:36:19.919691 |
2026-04-17 01:36:19.919840 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-17 01:36:20.942771 | orchestrator | changed
2026-04-17 01:36:20.952653 |
2026-04-17 01:36:20.952786 | PLAY RECAP
2026-04-17 01:36:20.952862 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-17 01:36:20.952931 |
2026-04-17 01:36:21.072613 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-17 01:36:21.076137 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-17 01:36:21.907593 |
2026-04-17 01:36:21.907761 | PLAY [Base post-logs]
2026-04-17 01:36:21.918431 |
2026-04-17 01:36:21.918588 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-17 01:36:22.394563 | localhost | changed
2026-04-17 01:36:22.411651 |
2026-04-17 01:36:22.411832 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-17 01:36:22.450564 | localhost | ok
2026-04-17 01:36:22.456635 |
2026-04-17 01:36:22.456784 | TASK [Set zuul-log-path fact]
2026-04-17 01:36:22.473855 | localhost | ok
2026-04-17 01:36:22.491614 |
2026-04-17 01:36:22.491787 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-17 01:36:22.521164 | localhost | ok
2026-04-17 01:36:22.528305 |
2026-04-17 01:36:22.528489 | TASK [upload-logs : Create log directories]
2026-04-17 01:36:23.013334 | localhost | changed
2026-04-17 01:36:23.020922 |
2026-04-17 01:36:23.021054 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-17 01:36:23.503534 | localhost -> localhost | ok: Runtime: 0:00:00.007012
2026-04-17 01:36:23.512716 |
2026-04-17 01:36:23.512912 | TASK [upload-logs : Upload logs to log server]
2026-04-17 01:36:24.083612 | localhost | Output suppressed because no_log was given
2026-04-17 01:36:24.085953 |
2026-04-17 01:36:24.086077 | LOOP [upload-logs : Compress console log and json output]
2026-04-17 01:36:24.146879 | localhost | skipping: Conditional result was False
2026-04-17 01:36:24.163650 | localhost | skipping: Conditional result was False
2026-04-17 01:36:24.167688 |
2026-04-17 01:36:24.167802 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-17 01:36:24.229340 | localhost | skipping: Conditional result was False
2026-04-17 01:36:24.229948 |
2026-04-17 01:36:24.233787 | localhost | skipping: Conditional result was False
2026-04-17 01:36:24.247768 |
2026-04-17 01:36:24.248030 | LOOP [upload-logs : Upload console log and json output]